Test Report: Hyperkit_macOS 19461

                    
ee4f5fb2e73abafca70b3598ab7977372efc25a8:2024-08-16:35814

Failed tests (21/215)

TestOffline (195.54s)
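
The log below shows the test shelling out to the minikube binary and failing on a non-zero exit status (exit 80 after ~3m10s). As a minimal, hypothetical sketch of that test pattern, with the command line copied from the log rather than from the actual source of aab_offline_test.go:

// Hypothetical sketch of the integration-test pattern seen below:
// run the minikube binary and fail the test on a non-zero exit.
package integration

import (
	"os/exec"
	"testing"
)

func TestOfflineSketch(t *testing.T) {
	cmd := exec.Command("out/minikube-darwin-amd64",
		"start", "-p", "offline-docker-087000",
		"--alsologtostderr", "-v=1", "--memory=2048",
		"--wait=true", "--driver=hyperkit")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// A failed start surfaces here as an *exec.ExitError (exit status 80 in this run).
		t.Fatalf("minikube start failed: %v\n%s", err, out)
	}
}
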

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-087000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-087000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit : exit status 80 (3m10.120656883s)

-- stdout --
	* [offline-docker-087000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "offline-docker-087000" primary control-plane node in "offline-docker-087000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "offline-docker-087000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I0816 10:54:18.163703    7449 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:54:18.164034    7449 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:54:18.164040    7449 out.go:358] Setting ErrFile to fd 2...
	I0816 10:54:18.164043    7449 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:54:18.164240    7449 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1276/.minikube/bin
	I0816 10:54:18.166060    7449 out.go:352] Setting JSON to false
	I0816 10:54:18.193473    7449 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5028,"bootTime":1723825830,"procs":431,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0816 10:54:18.193566    7449 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:54:18.251195    7449 out.go:177] * [offline-docker-087000] minikube v1.33.1 on Darwin 14.6.1
	I0816 10:54:18.297137    7449 notify.go:220] Checking for updates...
	I0816 10:54:18.324060    7449 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:54:18.361052    7449 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:54:18.380933    7449 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0816 10:54:18.408056    7449 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:54:18.427850    7449 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:54:18.455358    7449 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:54:18.476077    7449 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:54:18.504856    7449 out.go:177] * Using the hyperkit driver based on user configuration
	I0816 10:54:18.547081    7449 start.go:297] selected driver: hyperkit
	I0816 10:54:18.547109    7449 start.go:901] validating driver "hyperkit" against <nil>
	I0816 10:54:18.547133    7449 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:54:18.551765    7449 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:54:18.551878    7449 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19461-1276/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0816 10:54:18.560130    7449 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0816 10:54:18.564405    7449 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:54:18.564426    7449 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0816 10:54:18.564456    7449 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 10:54:18.564674    7449 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:54:18.564711    7449 cni.go:84] Creating CNI manager for ""
	I0816 10:54:18.564727    7449 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 10:54:18.564731    7449 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 10:54:18.564795    7449 start.go:340] cluster config:
	{Name:offline-docker-087000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-087000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:54:18.564878    7449 iso.go:125] acquiring lock: {Name:mkb430b43b5e7a8a683454482519837ed996c78d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:54:18.632778    7449 out.go:177] * Starting "offline-docker-087000" primary control-plane node in "offline-docker-087000" cluster
	I0816 10:54:18.674598    7449 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:54:18.674630    7449 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0816 10:54:18.674647    7449 cache.go:56] Caching tarball of preloaded images
	I0816 10:54:18.674757    7449 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 10:54:18.674766    7449 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:54:18.675050    7449 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/offline-docker-087000/config.json ...
	I0816 10:54:18.675069    7449 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/offline-docker-087000/config.json: {Name:mka9d06153b0933922d3a7ab3bbdcb74ab29f21b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:54:18.675429    7449 start.go:360] acquireMachinesLock for offline-docker-087000: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:54:18.675486    7449 start.go:364] duration metric: took 43.78µs to acquireMachinesLock for "offline-docker-087000"
	I0816 10:54:18.675510    7449 start.go:93] Provisioning new machine with config: &{Name:offline-docker-087000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-087000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:54:18.675560    7449 start.go:125] createHost starting for "" (driver="hyperkit")
	I0816 10:54:18.697556    7449 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0816 10:54:18.697710    7449 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:54:18.697753    7449 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:54:18.706427    7449 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53814
	I0816 10:54:18.706786    7449 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:54:18.707182    7449 main.go:141] libmachine: Using API Version  1
	I0816 10:54:18.707192    7449 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:54:18.707444    7449 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:54:18.707555    7449 main.go:141] libmachine: (offline-docker-087000) Calling .GetMachineName
	I0816 10:54:18.707662    7449 main.go:141] libmachine: (offline-docker-087000) Calling .DriverName
	I0816 10:54:18.707771    7449 start.go:159] libmachine.API.Create for "offline-docker-087000" (driver="hyperkit")
	I0816 10:54:18.707795    7449 client.go:168] LocalClient.Create starting
	I0816 10:54:18.707839    7449 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem
	I0816 10:54:18.707894    7449 main.go:141] libmachine: Decoding PEM data...
	I0816 10:54:18.707911    7449 main.go:141] libmachine: Parsing certificate...
	I0816 10:54:18.707995    7449 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem
	I0816 10:54:18.708034    7449 main.go:141] libmachine: Decoding PEM data...
	I0816 10:54:18.708049    7449 main.go:141] libmachine: Parsing certificate...
	I0816 10:54:18.708061    7449 main.go:141] libmachine: Running pre-create checks...
	I0816 10:54:18.708071    7449 main.go:141] libmachine: (offline-docker-087000) Calling .PreCreateCheck
	I0816 10:54:18.708145    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:54:18.708314    7449 main.go:141] libmachine: (offline-docker-087000) Calling .GetConfigRaw
	I0816 10:54:18.719347    7449 main.go:141] libmachine: Creating machine...
	I0816 10:54:18.719372    7449 main.go:141] libmachine: (offline-docker-087000) Calling .Create
	I0816 10:54:18.719625    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:54:18.719899    7449 main.go:141] libmachine: (offline-docker-087000) DBG | I0816 10:54:18.719599    7470 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:54:18.720025    7449 main.go:141] libmachine: (offline-docker-087000) Downloading /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1276/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 10:54:19.191331    7449 main.go:141] libmachine: (offline-docker-087000) DBG | I0816 10:54:19.191238    7470 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/id_rsa...
	I0816 10:54:19.381210    7449 main.go:141] libmachine: (offline-docker-087000) DBG | I0816 10:54:19.381079    7470 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/offline-docker-087000.rawdisk...
	I0816 10:54:19.381242    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Writing magic tar header
	I0816 10:54:19.381254    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Writing SSH key tar header
	I0816 10:54:19.381465    7449 main.go:141] libmachine: (offline-docker-087000) DBG | I0816 10:54:19.381423    7470 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000 ...
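
The DBG lines above show libmachine generating the machine's SSH keypair and raw disk image. As a rough, self-contained sketch of just the keypair step, using only standard Go crypto packages (illustrative; not minikube's actual common.go code):

// Hypothetical sketch: create an RSA keypair like the
// machines/offline-docker-087000/id_rsa file mentioned above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	// PEM-encode the private key and write it with 0600 permissions, as ssh expects.
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(priv),
	})
	if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
		log.Fatal(err)
	}
	// Write the matching public key in authorized_keys format (id_rsa.pub).
	pub, err := ssh.NewPublicKey(&priv.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
		log.Fatal(err)
	}
}
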
	I0816 10:54:19.897867    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:54:19.897898    7449 main.go:141] libmachine: (offline-docker-087000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/hyperkit.pid
	I0816 10:54:19.897914    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Using UUID 98088fdd-d9cc-4d82-be34-ff595d9bdea2
	I0816 10:54:20.061595    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Generated MAC 82:e8:82:bf:f5:0
	I0816 10:54:20.061618    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-087000
	I0816 10:54:20.061652    7449 main.go:141] libmachine: (offline-docker-087000) DBG | 2024/08/16 10:54:20 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"98088fdd-d9cc-4d82-be34-ff595d9bdea2", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:54:20.061692    7449 main.go:141] libmachine: (offline-docker-087000) DBG | 2024/08/16 10:54:20 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"98088fdd-d9cc-4d82-be34-ff595d9bdea2", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:54:20.061743    7449 main.go:141] libmachine: (offline-docker-087000) DBG | 2024/08/16 10:54:20 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "98088fdd-d9cc-4d82-be34-ff595d9bdea2", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/offline-docker-087000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-087000"}
	I0816 10:54:20.061797    7449 main.go:141] libmachine: (offline-docker-087000) DBG | 2024/08/16 10:54:20 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 98088fdd-d9cc-4d82-be34-ff595d9bdea2 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/offline-docker-087000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-087000"
	I0816 10:54:20.061810    7449 main.go:141] libmachine: (offline-docker-087000) DBG | 2024/08/16 10:54:20 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 10:54:20.064829    7449 main.go:141] libmachine: (offline-docker-087000) DBG | 2024/08/16 10:54:20 DEBUG: hyperkit: Pid is 7497
	I0816 10:54:20.065283    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Attempt 0
	I0816 10:54:20.065298    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:54:20.065385    7449 main.go:141] libmachine: (offline-docker-087000) DBG | hyperkit pid from json: 7497
	I0816 10:54:20.066722    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Searching for 82:e8:82:bf:f5:0 in /var/db/dhcpd_leases ...
	I0816 10:54:20.066888    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:54:20.066899    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:54:20.066929    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:54:20.066957    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:54:20.066975    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:54:20.066988    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:54:20.067002    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:54:20.067038    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:54:20.067060    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:54:20.067080    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:54:20.067096    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:54:20.067110    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:54:20.067122    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:54:20.067134    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:54:20.067147    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:54:20.067171    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:54:20.067184    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:54:20.067197    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
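
Each numbered "Attempt" in this log polls /var/db/dhcpd_leases for an entry whose hw_address matches the VM's generated MAC (82:e8:82:bf:f5:0); within the portion captured here, no lease for that MAC ever appears among the 17 entries. A rough sketch of one such scan, assuming macOS bootpd's lease-file field names (ip_address=, hw_address=<type>,<mac>):

// Hypothetical sketch of a single dhcpd_leases scan like the one logged above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func ipForMAC(leasesPath, mac string) (string, error) {
	f, err := os.Open(leasesPath)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string // assumes ip_address= precedes hw_address= within each lease entry
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// hw_address is recorded as "<type>,<mac>", e.g. "1,82:e8:82:bf:f5:0".
			if strings.HasSuffix(line, ","+mac) {
				return ip, nil
			}
		}
	}
	return "", fmt.Errorf("MAC %s not found in %s", mac, leasesPath)
}

func main() {
	ip, err := ipForMAC("/var/db/dhcpd_leases", "82:e8:82:bf:f5:0")
	if err != nil {
		fmt.Println(err) // the driver retries on this outcome, roughly every 2s in the log
		return
	}
	fmt.Println("VM IP:", ip)
}
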
	I0816 10:54:20.072957    7449 main.go:141] libmachine: (offline-docker-087000) DBG | 2024/08/16 10:54:20 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 10:54:20.200403    7449 main.go:141] libmachine: (offline-docker-087000) DBG | 2024/08/16 10:54:20 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 10:54:20.201017    7449 main.go:141] libmachine: (offline-docker-087000) DBG | 2024/08/16 10:54:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:54:20.201029    7449 main.go:141] libmachine: (offline-docker-087000) DBG | 2024/08/16 10:54:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:54:20.201037    7449 main.go:141] libmachine: (offline-docker-087000) DBG | 2024/08/16 10:54:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:54:20.201046    7449 main.go:141] libmachine: (offline-docker-087000) DBG | 2024/08/16 10:54:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:54:20.576622    7449 main.go:141] libmachine: (offline-docker-087000) DBG | 2024/08/16 10:54:20 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 10:54:20.576641    7449 main.go:141] libmachine: (offline-docker-087000) DBG | 2024/08/16 10:54:20 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 10:54:20.691521    7449 main.go:141] libmachine: (offline-docker-087000) DBG | 2024/08/16 10:54:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:54:20.691546    7449 main.go:141] libmachine: (offline-docker-087000) DBG | 2024/08/16 10:54:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:54:20.691565    7449 main.go:141] libmachine: (offline-docker-087000) DBG | 2024/08/16 10:54:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:54:20.691576    7449 main.go:141] libmachine: (offline-docker-087000) DBG | 2024/08/16 10:54:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:54:20.692399    7449 main.go:141] libmachine: (offline-docker-087000) DBG | 2024/08/16 10:54:20 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 10:54:20.692409    7449 main.go:141] libmachine: (offline-docker-087000) DBG | 2024/08/16 10:54:20 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 10:54:22.068394    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Attempt 1
	I0816 10:54:22.068408    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:54:22.068512    7449 main.go:141] libmachine: (offline-docker-087000) DBG | hyperkit pid from json: 7497
	I0816 10:54:22.069312    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Searching for 82:e8:82:bf:f5:0 in /var/db/dhcpd_leases ...
	I0816 10:54:22.069367    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:54:22.069380    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:54:22.069389    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:54:22.069394    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:54:22.069410    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:54:22.069416    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:54:22.069423    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:54:22.069429    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:54:22.069437    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:54:22.069447    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:54:22.069456    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:54:22.069464    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:54:22.069472    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:54:22.069481    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:54:22.069490    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:54:22.069497    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:54:22.069504    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:54:22.069514    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:54:24.069838    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Attempt 2
	I0816 10:54:24.069855    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:54:24.069892    7449 main.go:141] libmachine: (offline-docker-087000) DBG | hyperkit pid from json: 7497
	I0816 10:54:24.070800    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Searching for 82:e8:82:bf:f5:0 in /var/db/dhcpd_leases ...
	I0816 10:54:24.070814    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:54:24.070838    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:54:24.070849    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:54:24.070856    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:54:24.070862    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:54:24.070907    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:54:24.070921    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:54:24.070933    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:54:24.070942    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:54:24.070965    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:54:24.070985    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:54:24.070995    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:54:24.071004    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:54:24.071022    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:54:24.071036    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:54:24.071049    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:54:24.071057    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:54:24.071066    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:54:26.056911    7449 main.go:141] libmachine: (offline-docker-087000) DBG | 2024/08/16 10:54:26 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0816 10:54:26.057086    7449 main.go:141] libmachine: (offline-docker-087000) DBG | 2024/08/16 10:54:26 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0816 10:54:26.057096    7449 main.go:141] libmachine: (offline-docker-087000) DBG | 2024/08/16 10:54:26 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0816 10:54:26.071000    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Attempt 3
	I0816 10:54:26.071011    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:54:26.071118    7449 main.go:141] libmachine: (offline-docker-087000) DBG | hyperkit pid from json: 7497
	I0816 10:54:26.071993    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Searching for 82:e8:82:bf:f5:0 in /var/db/dhcpd_leases ...
	I0816 10:54:26.072022    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:54:26.072030    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:54:26.072038    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:54:26.072051    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:54:26.072058    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:54:26.072065    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:54:26.072071    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:54:26.072077    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:54:26.072094    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:54:26.072110    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:54:26.072135    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:54:26.072151    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:54:26.072160    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:54:26.072166    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:54:26.072181    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:54:26.072191    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:54:26.072198    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:54:26.072214    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:54:26.077115    7449 main.go:141] libmachine: (offline-docker-087000) DBG | 2024/08/16 10:54:26 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0816 10:54:28.072940    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Attempt 4
	I0816 10:54:28.072965    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:54:28.073048    7449 main.go:141] libmachine: (offline-docker-087000) DBG | hyperkit pid from json: 7497
	I0816 10:54:28.073845    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Searching for 82:e8:82:bf:f5:0 in /var/db/dhcpd_leases ...
	I0816 10:54:28.073945    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:54:28.073960    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:54:28.073986    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:54:28.073996    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:54:28.074005    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:54:28.074013    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:54:28.074022    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:54:28.074031    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:54:28.074038    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:54:28.074046    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:54:28.074055    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:54:28.074070    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:54:28.074082    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:54:28.074091    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:54:28.074097    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:54:28.074110    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:54:28.074123    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:54:28.074133    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:54:30.075545    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Attempt 5
	I0816 10:54:30.075556    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:54:30.075648    7449 main.go:141] libmachine: (offline-docker-087000) DBG | hyperkit pid from json: 7497
	I0816 10:54:30.076411    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Searching for 82:e8:82:bf:f5:0 in /var/db/dhcpd_leases ...
	I0816 10:54:30.076467    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:54:30.076481    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:54:30.076491    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:54:30.076498    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:54:30.076505    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:54:30.076513    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:54:30.076520    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:54:30.076527    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:54:30.076539    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:54:30.076546    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:54:30.076552    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:54:30.076559    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:54:30.076565    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:54:30.076571    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:54:30.076580    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:54:30.076589    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:54:30.076595    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:54:30.076605    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:54:32.077638    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Attempt 6
	I0816 10:54:32.077652    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:54:32.077754    7449 main.go:141] libmachine: (offline-docker-087000) DBG | hyperkit pid from json: 7497
	I0816 10:54:32.078510    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Searching for 82:e8:82:bf:f5:0 in /var/db/dhcpd_leases ...
	I0816 10:54:32.078558    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:54:32.078575    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:54:32.078593    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:54:32.078603    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:54:32.078615    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:54:32.078625    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:54:32.078643    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:54:32.078662    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:54:32.078669    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:54:32.078688    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:54:32.078702    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:54:32.078721    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:54:32.078742    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:54:32.078754    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:54:32.078767    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:54:32.078780    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:54:32.078790    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:54:32.078799    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:54:34.080684    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Attempt 7
	I0816 10:54:34.080699    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:54:34.080814    7449 main.go:141] libmachine: (offline-docker-087000) DBG | hyperkit pid from json: 7497
	I0816 10:54:34.081556    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Searching for 82:e8:82:bf:f5:0 in /var/db/dhcpd_leases ...
	I0816 10:54:34.081629    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:54:34.081640    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:54:34.081650    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:54:34.081666    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:54:34.081672    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:54:34.081684    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:54:34.081693    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:54:34.081709    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:54:34.081726    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:54:34.081735    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:54:34.081744    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:54:34.081751    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:54:34.081759    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:54:34.081769    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:54:34.081779    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:54:34.081801    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:54:34.081812    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:54:34.081824    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
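
The blocks above and below are the driver's IP-discovery loop: every two seconds it re-checks that hyperkit pid 7497 is alive, re-reads /var/db/dhcpd_leases, and scans the entries for the VM's MAC 82:e8:82:bf:f5:0, which never appears. The following is a minimal Go sketch of that loop, not the driver's actual code; it assumes the on-disk bootpd lease format (name=/ip_address=/hw_address= blocks) from which the "dhcp entry:" structs above are parsed, and parseLeases/waitForMAC are hypothetical names.

package main

import (
	"fmt"
	"os"
	"regexp"
	"time"
)

// leaseEntry mirrors the fields printed in the "dhcp entry:" log lines above.
type leaseEntry struct {
	Name      string
	IPAddress string
	HWAddress string
}

// Assumed on-disk format written by macOS bootpd to /var/db/dhcpd_leases:
// blocks of "name=...", "ip_address=...", "hw_address=1,..." lines.
var leaseRE = regexp.MustCompile(`name=(\S+)\s+ip_address=(\S+)\s+hw_address=1,(\S+)`)

// parseLeases reads the lease file and extracts one entry per lease block.
func parseLeases(path string) ([]leaseEntry, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	var entries []leaseEntry
	for _, m := range leaseRE.FindAllStringSubmatch(string(data), -1) {
		entries = append(entries, leaseEntry{Name: m[1], IPAddress: m[2], HWAddress: m[3]})
	}
	return entries, nil
}

// waitForMAC reproduces the cadence in the log: one scan per attempt,
// two seconds apart, until the MAC shows up or attempts are exhausted.
func waitForMAC(path, mac string, attempts int, delay time.Duration) (string, error) {
	for i := 1; i <= attempts; i++ {
		entries, err := parseLeases(path)
		if err != nil {
			return "", err
		}
		for _, e := range entries {
			if e.HWAddress == mac {
				return e.IPAddress, nil
			}
		}
		time.Sleep(delay)
	}
	return "", fmt.Errorf("%s not found after %d attempts", mac, attempts)
}

func main() {
	ip, err := waitForMAC("/var/db/dhcpd_leases", "82:e8:82:bf:f5:0", 30, 2*time.Second)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("VM IP:", ip)
}

Note the exact-string match on HWAddress: it only works because both sides use the same unpadded octet form (see the note after attempt 21 below).
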
	[Attempts 8 through 20 (10:54:36 to 10:55:00, one scan every two seconds) are elided here: each pass re-confirms hyperkit pid 7497, finds the same 17 lease entries, and never matches 82:e8:82:bf:f5:0.]
	I0816 10:55:02.115394    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Attempt 21
	I0816 10:55:02.115409    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:55:02.115504    7449 main.go:141] libmachine: (offline-docker-087000) DBG | hyperkit pid from json: 7497
	I0816 10:55:02.116275    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Searching for 82:e8:82:bf:f5:0 in /var/db/dhcpd_leases ...
	I0816 10:55:02.116338    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:55:02.116348    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:55:02.116360    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:55:02.116368    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:55:02.116374    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:55:02.116384    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:55:02.116405    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:55:02.116418    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:55:02.116428    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:55:02.116443    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:55:02.116458    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:55:02.116472    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:55:02.116480    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:55:02.116489    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:55:02.116504    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:55:02.116517    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:55:02.116526    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:55:02.116533    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:55:04.118471    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Attempt 22
	I0816 10:55:04.118484    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:55:04.118541    7449 main.go:141] libmachine: (offline-docker-087000) DBG | hyperkit pid from json: 7497
	I0816 10:55:04.119332    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Searching for 82:e8:82:bf:f5:0 in /var/db/dhcpd_leases ...
	I0816 10:55:04.119378    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:55:04.119399    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:55:04.119411    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:55:04.119428    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:55:04.119438    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:55:04.119470    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:55:04.119492    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:55:04.119512    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:55:04.119526    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:55:04.119535    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:55:04.119543    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:55:04.119551    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:55:04.119559    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:55:04.119565    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:55:04.119573    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:55:04.119585    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:55:04.119595    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:55:04.119608    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:55:06.120033    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Attempt 23
	I0816 10:55:06.120045    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:55:06.120090    7449 main.go:141] libmachine: (offline-docker-087000) DBG | hyperkit pid from json: 7497
	I0816 10:55:06.121094    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Searching for 82:e8:82:bf:f5:0 in /var/db/dhcpd_leases ...
	I0816 10:55:06.121131    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:55:06.121140    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:55:06.121154    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:55:06.121160    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:55:06.121166    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:55:06.121172    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:55:06.121178    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:55:06.121204    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:55:06.121215    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:55:06.121234    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:55:06.121245    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:55:06.121252    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:55:06.121259    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:55:06.121268    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:55:06.121275    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:55:06.121283    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:55:06.121290    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:55:06.121297    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:55:08.121754    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Attempt 24
	I0816 10:55:08.121771    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:55:08.121816    7449 main.go:141] libmachine: (offline-docker-087000) DBG | hyperkit pid from json: 7497
	I0816 10:55:08.122661    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Searching for 82:e8:82:bf:f5:0 in /var/db/dhcpd_leases ...
	I0816 10:55:08.122682    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:55:08.122701    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:55:08.122712    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:55:08.122721    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:55:08.122730    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:55:08.122740    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:55:08.122749    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:55:08.122759    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:55:08.122766    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:55:08.122775    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:55:08.122786    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:55:08.122793    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:55:08.122801    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:55:08.122807    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:55:08.122815    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:55:08.122832    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:55:08.122855    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:55:08.122873    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:55:10.123353    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Attempt 25
	I0816 10:55:10.123376    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:55:10.123438    7449 main.go:141] libmachine: (offline-docker-087000) DBG | hyperkit pid from json: 7497
	I0816 10:55:10.124263    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Searching for 82:e8:82:bf:f5:0 in /var/db/dhcpd_leases ...
	I0816 10:55:10.124306    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:55:10.124315    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:55:10.124323    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:55:10.124332    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:55:10.124345    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:55:10.124352    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:55:10.124367    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:55:10.124375    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:55:10.124384    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:55:10.124392    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:55:10.124400    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:55:10.124406    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:55:10.124430    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:55:10.124443    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:55:10.124452    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:55:10.124460    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:55:10.124467    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:55:10.124475    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:55:12.125547    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Attempt 26
	I0816 10:55:12.125559    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:55:12.125626    7449 main.go:141] libmachine: (offline-docker-087000) DBG | hyperkit pid from json: 7497
	I0816 10:55:12.126398    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Searching for 82:e8:82:bf:f5:0 in /var/db/dhcpd_leases ...
	I0816 10:55:12.126432    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:55:12.126447    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:55:12.126460    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:55:12.126469    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:55:12.126476    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:55:12.126482    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:55:12.126490    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:55:12.126496    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:55:12.126502    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:55:12.126510    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:55:12.126516    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:55:12.126531    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:55:12.126538    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:55:12.126544    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:55:12.126550    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:55:12.126556    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:55:12.126561    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:55:12.126576    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:55:14.128524    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Attempt 27
	I0816 10:55:14.128544    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:55:14.128617    7449 main.go:141] libmachine: (offline-docker-087000) DBG | hyperkit pid from json: 7497
	I0816 10:55:14.129415    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Searching for 82:e8:82:bf:f5:0 in /var/db/dhcpd_leases ...
	I0816 10:55:14.129453    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:55:14.129464    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:55:14.129474    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:55:14.129483    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:55:14.129490    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:55:14.129511    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:55:14.129519    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:55:14.129543    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:55:14.129556    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:55:14.129565    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:55:14.129573    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:55:14.129579    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:55:14.129586    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:55:14.129592    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:55:14.129598    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:55:14.129613    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:55:14.129622    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:55:14.129630    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:55:16.131092    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Attempt 28
	I0816 10:55:16.131105    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:55:16.131172    7449 main.go:141] libmachine: (offline-docker-087000) DBG | hyperkit pid from json: 7497
	I0816 10:55:16.131991    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Searching for 82:e8:82:bf:f5:0 in /var/db/dhcpd_leases ...
	I0816 10:55:16.132048    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:55:16.132060    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:55:16.132071    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:55:16.132085    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:55:16.132094    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:55:16.132100    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:55:16.132109    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:55:16.132118    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:55:16.132125    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:55:16.132132    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:55:16.132138    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:55:16.132151    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:55:16.132167    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:55:16.132175    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:55:16.132183    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:55:16.132196    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:55:16.132222    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:55:16.132232    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:55:18.132537    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Attempt 29
	I0816 10:55:18.132553    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:55:18.132616    7449 main.go:141] libmachine: (offline-docker-087000) DBG | hyperkit pid from json: 7497
	I0816 10:55:18.133394    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Searching for 82:e8:82:bf:f5:0 in /var/db/dhcpd_leases ...
	I0816 10:55:18.133435    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:55:18.133450    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:55:18.133462    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:55:18.133482    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:55:18.133493    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:55:18.133507    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:55:18.133523    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:55:18.133535    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:55:18.133543    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:55:18.133552    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:55:18.133559    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:55:18.133568    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:55:18.133583    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:55:18.133595    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:55:18.133610    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:55:18.133619    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:55:18.133635    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:55:18.133650    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:55:20.135570    7449 client.go:171] duration metric: took 1m1.429647497s to LocalClient.Create
	I0816 10:55:22.136817    7449 start.go:128] duration metric: took 1m3.463191819s to createHost
	I0816 10:55:22.136831    7449 start.go:83] releasing machines lock for "offline-docker-087000", held for 1m3.463282458s
	W0816 10:55:22.136866    7449 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 82:e8:82:bf:f5:0
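Each numbered "Attempt" block above is one pass over the macOS bootpd lease database: the driver scans /var/db/dhcpd_leases for the MAC it generated for the VM's NIC and gives up after 30 passes spaced ~2s apart, which matches the ~1m1s that LocalClient.Create reports before the "IP address never found in dhcp leases file" error here. A minimal sketch of that polling loop, assuming the lease-entry fields echoed in the DBG lines (name, ip_address, hw_address, lease) and not the driver's actual code:

package main

import (
	"fmt"
	"os"
	"strings"
	"time"
)

// findIPForMAC scans the bootpd lease file once and returns the ip_address
// of the entry whose hw_address matches mac, or "" if none does.
func findIPForMAC(path, mac string) (string, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	ip := ""
	for _, line := range strings.Split(string(data), "\n") {
		line = strings.TrimSpace(line)
		if strings.HasPrefix(line, "ip_address=") {
			ip = strings.TrimPrefix(line, "ip_address=")
		}
		// "1," is the ethernet hardware-type tag, as in "hw_address=1,8a:e:de:5b:b5:8b".
		if strings.HasPrefix(line, "hw_address=1,") &&
			strings.TrimPrefix(line, "hw_address=1,") == mac {
			return ip, nil
		}
	}
	return "", nil
}

func main() {
	const mac = "82:e8:82:bf:f5:0" // the MAC these attempts wait for
	for attempt := 0; attempt < 30; attempt++ { // the log shows attempts 0 through 29
		if ip, err := findIPForMAC("/var/db/dhcpd_leases", mac); err == nil && ip != "" {
			fmt.Println("found IP:", ip)
			return
		}
		time.Sleep(2 * time.Second) // matches the ~2s spacing of the attempts above
	}
	fmt.Println("could not find an IP address for", mac)
}

Note that the hyperkit process (pid 7497) stays alive through every attempt, so this does not appear to be a VM crash: the guest simply never completes DHCP, so no entry for the new MAC ever shows up among the 17 stale minikube leases.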
	I0816 10:55:22.137183    7449 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:55:22.137207    7449 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:55:22.146491    7449 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53850
	I0816 10:55:22.146995    7449 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:55:22.147439    7449 main.go:141] libmachine: Using API Version  1
	I0816 10:55:22.147454    7449 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:55:22.147726    7449 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:55:22.148087    7449 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:55:22.148111    7449 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:55:22.157182    7449 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53852
	I0816 10:55:22.157551    7449 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:55:22.157978    7449 main.go:141] libmachine: Using API Version  1
	I0816 10:55:22.157993    7449 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:55:22.158246    7449 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:55:22.158364    7449 main.go:141] libmachine: (offline-docker-087000) Calling .GetState
	I0816 10:55:22.158448    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:55:22.158519    7449 main.go:141] libmachine: (offline-docker-087000) DBG | hyperkit pid from json: 7497
	I0816 10:55:22.159532    7449 main.go:141] libmachine: (offline-docker-087000) Calling .DriverName
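The "Launching plugin server ... Plugin server listening at address 127.0.0.1:53850 ... Calling .GetVersion" sequence traces libmachine's plugin model: each driver runs as a separate process serving RPC on an ephemeral localhost port, and the client dials it for every method call. A minimal net/rpc sketch of that shape (libmachine's real wire protocol differs in detail; this only illustrates the pattern):

package main

import (
	"fmt"
	"net"
	"net/rpc"
)

type NoArgs struct{}

// Driver stands in for a machine driver served by the plugin process.
type Driver struct{}

// GetVersion mirrors the "Using API Version  1" exchange above.
func (d *Driver) GetVersion(_ NoArgs, v *int) error { *v = 1; return nil }

func main() {
	if err := rpc.Register(&Driver{}); err != nil {
		panic(err)
	}
	ln, err := net.Listen("tcp", "127.0.0.1:0") // ephemeral port, e.g. :53850 above
	if err != nil {
		panic(err)
	}
	fmt.Println("Plugin server listening at address", ln.Addr())
	for {
		conn, err := ln.Accept()
		if err != nil {
			return
		}
		go rpc.ServeConn(conn) // one connection per client call sequence
	}
}

This is why the log relaunches a fresh plugin server (new port each time) before the delete and again before the retry: every phase gets its own driver process.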
	I0816 10:55:22.200509    7449 out.go:177] * Deleting "offline-docker-087000" in hyperkit ...
	I0816 10:55:22.221299    7449 main.go:141] libmachine: (offline-docker-087000) Calling .Remove
	I0816 10:55:22.221419    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:55:22.221430    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:55:22.221500    7449 main.go:141] libmachine: (offline-docker-087000) DBG | hyperkit pid from json: 7497
	I0816 10:55:22.222451    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:55:22.222506    7449 main.go:141] libmachine: (offline-docker-087000) DBG | waiting for graceful shutdown
	I0816 10:55:23.223450    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:55:23.223526    7449 main.go:141] libmachine: (offline-docker-087000) DBG | hyperkit pid from json: 7497
	I0816 10:55:23.224468    7449 main.go:141] libmachine: (offline-docker-087000) DBG | waiting for graceful shutdown
	I0816 10:55:24.225264    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:55:24.225381    7449 main.go:141] libmachine: (offline-docker-087000) DBG | hyperkit pid from json: 7497
	I0816 10:55:24.227082    7449 main.go:141] libmachine: (offline-docker-087000) DBG | waiting for graceful shutdown
	I0816 10:55:25.227505    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:55:25.227570    7449 main.go:141] libmachine: (offline-docker-087000) DBG | hyperkit pid from json: 7497
	I0816 10:55:25.228386    7449 main.go:141] libmachine: (offline-docker-087000) DBG | waiting for graceful shutdown
	I0816 10:55:26.229808    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:55:26.229888    7449 main.go:141] libmachine: (offline-docker-087000) DBG | hyperkit pid from json: 7497
	I0816 10:55:26.230457    7449 main.go:141] libmachine: (offline-docker-087000) DBG | waiting for graceful shutdown
	I0816 10:55:27.231648    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:55:27.231762    7449 main.go:141] libmachine: (offline-docker-087000) DBG | hyperkit pid from json: 7497
	I0816 10:55:27.232713    7449 main.go:141] libmachine: (offline-docker-087000) DBG | sending sigkill
	I0816 10:55:27.232722    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	W0816 10:55:27.244622    7449 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 82:e8:82:bf:f5:0
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 82:e8:82:bf:f5:0
	I0816 10:55:27.244637    7449 start.go:729] Will try again in 5 seconds ...
	I0816 10:55:27.254378    7449 main.go:141] libmachine: (offline-docker-087000) DBG | 2024/08/16 10:55:27 WARN : hyperkit: failed to read stderr: EOF
	I0816 10:55:27.254405    7449 main.go:141] libmachine: (offline-docker-087000) DBG | 2024/08/16 10:55:27 WARN : hyperkit: failed to read stdout: EOF
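The removal above escalates the way most supervisors do: poll the hyperkit pid once per second while "waiting for graceful shutdown", then fall back to "sending sigkill" after roughly five seconds, at which point the logger's reads of the VM's stdout/stderr return the two EOF warnings. A sketch of that escalation (an illustration of the pattern, not the driver's actual Remove implementation):

package main

import (
	"syscall"
	"time"
)

// stopVM polls pid for a graceful exit until grace expires, then SIGKILLs it.
func stopVM(pid int, grace time.Duration) {
	deadline := time.Now().Add(grace)
	for time.Now().Before(deadline) {
		// Signal 0 sends nothing but reports whether the process still exists.
		if err := syscall.Kill(pid, 0); err == syscall.ESRCH {
			return // process exited on its own
		}
		time.Sleep(time.Second) // "waiting for graceful shutdown"
	}
	_ = syscall.Kill(pid, syscall.SIGKILL) // "sending sigkill"
}

func main() {
	stopVM(7497, 5*time.Second) // pid and ~5s grace period taken from the log above
}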
	I0816 10:55:32.246636    7449 start.go:360] acquireMachinesLock for offline-docker-087000: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:56:25.066462    7449 start.go:364] duration metric: took 52.821401852s to acquireMachinesLock for "offline-docker-087000"
	I0816 10:56:25.066492    7449 start.go:93] Provisioning new machine with config: &{Name:offline-docker-087000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-087000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
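The start.go:93 line above dumps minikube's entire machine config in one flattened struct. As a reading aid, a trimmed, hypothetical Go view of just the fields this offline test exercises (field names copied from the dump; this is not minikube's actual ClusterConfig definition, which carries many more fields):

package sketch

import "time"

// MachineConfig is an illustrative subset of the config printed above.
type MachineConfig struct {
	Name             string        // offline-docker-087000
	Memory           int           // 2048 (MB), from --memory=2048
	CPUs             int           // 2
	DiskSize         int           // 20000 (MB)
	Driver           string        // hyperkit
	StartHostTimeout time.Duration // 6m0s: the budget each createHost attempt gets
}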
	I0816 10:56:25.066549    7449 start.go:125] createHost starting for "" (driver="hyperkit")
	I0816 10:56:25.088263    7449 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0816 10:56:25.088350    7449 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:56:25.088382    7449 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:56:25.096815    7449 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53860
	I0816 10:56:25.097154    7449 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:56:25.097530    7449 main.go:141] libmachine: Using API Version  1
	I0816 10:56:25.097552    7449 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:56:25.097757    7449 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:56:25.097871    7449 main.go:141] libmachine: (offline-docker-087000) Calling .GetMachineName
	I0816 10:56:25.097958    7449 main.go:141] libmachine: (offline-docker-087000) Calling .DriverName
	I0816 10:56:25.098065    7449 start.go:159] libmachine.API.Create for "offline-docker-087000" (driver="hyperkit")
	I0816 10:56:25.098081    7449 client.go:168] LocalClient.Create starting
	I0816 10:56:25.098107    7449 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem
	I0816 10:56:25.098158    7449 main.go:141] libmachine: Decoding PEM data...
	I0816 10:56:25.098167    7449 main.go:141] libmachine: Parsing certificate...
	I0816 10:56:25.098209    7449 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem
	I0816 10:56:25.098246    7449 main.go:141] libmachine: Decoding PEM data...
	I0816 10:56:25.098258    7449 main.go:141] libmachine: Parsing certificate...
	I0816 10:56:25.098270    7449 main.go:141] libmachine: Running pre-create checks...
	I0816 10:56:25.098275    7449 main.go:141] libmachine: (offline-docker-087000) Calling .PreCreateCheck
	I0816 10:56:25.098361    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:56:25.098398    7449 main.go:141] libmachine: (offline-docker-087000) Calling .GetConfigRaw
	I0816 10:56:25.130046    7449 main.go:141] libmachine: Creating machine...
	I0816 10:56:25.130056    7449 main.go:141] libmachine: (offline-docker-087000) Calling .Create
	I0816 10:56:25.130143    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:56:25.130261    7449 main.go:141] libmachine: (offline-docker-087000) DBG | I0816 10:56:25.130132    7652 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:56:25.130306    7449 main.go:141] libmachine: (offline-docker-087000) Downloading /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1276/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 10:56:25.354875    7449 main.go:141] libmachine: (offline-docker-087000) DBG | I0816 10:56:25.354789    7652 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/id_rsa...
	I0816 10:56:25.537683    7449 main.go:141] libmachine: (offline-docker-087000) DBG | I0816 10:56:25.537590    7652 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/offline-docker-087000.rawdisk...
	I0816 10:56:25.537708    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Writing magic tar header
	I0816 10:56:25.537719    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Writing SSH key tar header
	I0816 10:56:25.538293    7449 main.go:141] libmachine: (offline-docker-087000) DBG | I0816 10:56:25.538244    7652 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000 ...
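The "Writing magic tar header" and "Writing SSH key tar header" lines follow the boot2docker raw-disk convention used by docker-machine-style drivers: the .rawdisk begins with a tiny tar stream whose first entry name is a magic format-me string, followed by the freshly generated public key, so the guest can detect an unformatted disk and install the key on first boot. A sketch assuming that convention (the magic string and entry names here are the generic docker-machine ones, not verified against this driver):

package main

import (
	"archive/tar"
	"bytes"
	"os"
)

// makeDiskImage builds the tar prefix the log lines above describe: a magic
// first entry, then the SSH public key for the guest to pick up.
func makeDiskImage(pubKey []byte) []byte {
	magic := "boot2docker, please format-me" // assumed magic string
	var buf bytes.Buffer
	tw := tar.NewWriter(&buf)
	tw.WriteHeader(&tar.Header{Name: magic, Size: int64(len(magic)), Mode: 0644}) // "Writing magic tar header"
	tw.Write([]byte(magic))
	tw.WriteHeader(&tar.Header{Name: ".ssh/key.pub", Size: int64(len(pubKey)), Mode: 0644}) // "Writing SSH key tar header"
	tw.Write(pubKey)
	tw.Close()
	return buf.Bytes() // written at offset 0 of the .rawdisk file
}

func main() {
	key, _ := os.ReadFile("id_rsa.pub") // path shortened for the sketch
	os.WriteFile("offline-docker-087000.rawdisk", makeDiskImage(key), 0644)
}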
	I0816 10:56:25.912506    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:56:25.912524    7449 main.go:141] libmachine: (offline-docker-087000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/hyperkit.pid
	I0816 10:56:25.912576    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Using UUID 6ef8b0dc-f5a4-44df-8758-767893559c74
	I0816 10:56:25.939901    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Generated MAC fa:f9:25:f5:87:2a
	I0816 10:56:25.939926    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-087000
	I0816 10:56:25.939970    7449 main.go:141] (offline-docker-087000) DBG | 2024/08/16 10:56:25 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"6ef8b0dc-f5a4-44df-8758-767893559c74", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001141b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:56:25.940001    7449 main.go:141] (offline-docker-087000) DBG | 2024/08/16 10:56:25 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"6ef8b0dc-f5a4-44df-8758-767893559c74", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001141b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:56:25.940068    7449 main.go:141] (offline-docker-087000) DBG | 2024/08/16 10:56:25 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "6ef8b0dc-f5a4-44df-8758-767893559c74", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/offline-docker-087000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-087000"}
	I0816 10:56:25.940119    7449 main.go:141] (offline-docker-087000) DBG | 2024/08/16 10:56:25 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 6ef8b0dc-f5a4-44df-8758-767893559c74 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/offline-docker-087000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-087000"
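For readers decoding the CmdLine above, here is an annotated reconstruction of the same argv, with $STATE standing in for .../machines/offline-docker-087000; the flag glosses follow hyperkit's usage text and are a reading aid, not new behavior:

package main

import "fmt"

var args = []string{
	"-A",                        // generate ACPI tables for the guest
	"-u",                        // RTC keeps UTC
	"-F", "$STATE/hyperkit.pid", // pid file each "hyperkit pid from json" check consults
	"-c", "2",                   // CPUs=2
	"-m", "2048M",               // Memory=2048MB
	"-s", "0:0,hostbridge",      // PCI slot 0: host bridge
	"-s", "31,lpc",              // slot 31: LPC bridge for the serial console
	"-s", "1:0,virtio-net",      // slot 1: the NIC whose MAC is awaited in /var/db/dhcpd_leases
	"-U", "6ef8b0dc-f5a4-44df-8758-767893559c74", // VM UUID; vmnet derives the "Generated MAC" from it
	"-s", "2:0,virtio-blk,$STATE/offline-docker-087000.rawdisk", // slot 2: the raw disk created above
	"-s", "3,ahci-cd,$STATE/boot2docker.iso",                    // slot 3: the boot ISO
	"-s", "4,virtio-rnd",        // slot 4: entropy device (hw_rng_model=virtio)
	"-l", "com1,autopty=$STATE/tty,log=$STATE/console-ring",     // serial console capture
	"-f", "kexec,$STATE/bzimage,$STATE/initrd,<kernel cmdline>", // direct kernel boot, no bootrom
}

func main() { fmt.Println(args) }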
	I0816 10:56:25.940135    7449 main.go:141] libmachine: (offline-docker-087000) DBG | 2024/08/16 10:56:25 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 10:56:25.942968    7449 main.go:141] libmachine: (offline-docker-087000) DBG | 2024/08/16 10:56:25 DEBUG: hyperkit: Pid is 7653
	I0816 10:56:25.943458    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Attempt 0
	I0816 10:56:25.943503    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:56:25.943587    7449 main.go:141] libmachine: (offline-docker-087000) DBG | hyperkit pid from json: 7653
	I0816 10:56:25.944512    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Searching for fa:f9:25:f5:87:2a in /var/db/dhcpd_leases ...
	I0816 10:56:25.944539    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:56:25.944550    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:56:25.944564    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:56:25.944575    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:56:25.944604    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:56:25.944626    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:56:25.944635    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:56:25.944650    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:56:25.944660    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:56:25.944673    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:56:25.944686    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:56:25.944701    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:56:25.944712    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:56:25.944724    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:56:25.944735    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:56:25.944746    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:56:25.944755    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:56:25.944767    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:56:25.950914    7449 main.go:141] libmachine: (offline-docker-087000) DBG | 2024/08/16 10:56:25 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 10:56:25.959061    7449 main.go:141] libmachine: (offline-docker-087000) DBG | 2024/08/16 10:56:25 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/offline-docker-087000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 10:56:25.959980    7449 main.go:141] libmachine: (offline-docker-087000) DBG | 2024/08/16 10:56:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:56:25.960000    7449 main.go:141] libmachine: (offline-docker-087000) DBG | 2024/08/16 10:56:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:56:25.960012    7449 main.go:141] libmachine: (offline-docker-087000) DBG | 2024/08/16 10:56:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:56:25.960030    7449 main.go:141] libmachine: (offline-docker-087000) DBG | 2024/08/16 10:56:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:56:26.337925    7449 main.go:141] libmachine: (offline-docker-087000) DBG | 2024/08/16 10:56:26 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 10:56:26.337942    7449 main.go:141] libmachine: (offline-docker-087000) DBG | 2024/08/16 10:56:26 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 10:56:26.452422    7449 main.go:141] libmachine: (offline-docker-087000) DBG | 2024/08/16 10:56:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:56:26.452438    7449 main.go:141] libmachine: (offline-docker-087000) DBG | 2024/08/16 10:56:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:56:26.452456    7449 main.go:141] libmachine: (offline-docker-087000) DBG | 2024/08/16 10:56:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:56:26.452472    7449 main.go:141] libmachine: (offline-docker-087000) DBG | 2024/08/16 10:56:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:56:26.453389    7449 main.go:141] libmachine: (offline-docker-087000) DBG | 2024/08/16 10:56:26 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 10:56:26.453403    7449 main.go:141] libmachine: (offline-docker-087000) DBG | 2024/08/16 10:56:26 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
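At this point the driver has launched hyperkit (pid 7653) and settles into a wait loop: roughly every two seconds it re-reads /var/db/dhcpd_leases and looks for the MAC address fa:f9:25:f5:87:2a assigned to the new VM, while the interleaved hyperkit stderr lines (F_PUNCHHOLE/TRIM, vmx_set_ctlreg, rdmsr) appear to be ordinary guest-boot noise. Below is a minimal standalone sketch of that wait pattern, not minikube's actual implementation; the path, MAC, attempt budget, and 2-second interval are simply read off the log above.

    // Illustrative sketch only: poll a dhcpd_leases-style file until a given
    // MAC address shows up, mirroring the retry pattern in this log.
    package main

    import (
        "fmt"
        "os"
        "strings"
        "time"
    )

    // waitForLease scans the lease file once per interval until mac appears
    // or the attempt budget is exhausted.
    func waitForLease(path, mac string, attempts int, interval time.Duration) (bool, error) {
        for i := 0; i < attempts; i++ {
            data, err := os.ReadFile(path)
            if err != nil {
                return false, err
            }
            if strings.Contains(strings.ToLower(string(data)), strings.ToLower(mac)) {
                return true, nil // the DHCP server has recorded a lease for the VM
            }
            fmt.Printf("attempt %d: %s not in %s yet\n", i, mac, path)
            time.Sleep(interval)
        }
        return false, nil // gave up; the caller treats this as a failed VM boot
    }

    func main() {
        ok, err := waitForLease("/var/db/dhcpd_leases", "fa:f9:25:f5:87:2a", 30, 2*time.Second)
        fmt.Println(ok, err)
    }

Each pass that dumps the full lease table without a match corresponds to one "Attempt N" block in the log that follows.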
	I0816 10:56:27.945024    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Attempt 1
	I0816 10:56:27.945038    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:56:27.945108    7449 main.go:141] libmachine: (offline-docker-087000) DBG | hyperkit pid from json: 7653
	I0816 10:56:27.945931    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Searching for fa:f9:25:f5:87:2a in /var/db/dhcpd_leases ...
	I0816 10:56:27.945962    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	[... 17 dhcp lease entries identical to the Attempt 0 listing above (192.169.0.2 through 192.169.0.18); fa:f9:25:f5:87:2a still absent ...]
	I0816 10:56:29.946586    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Attempt 2
	I0816 10:56:29.946600    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:56:29.946714    7449 main.go:141] libmachine: (offline-docker-087000) DBG | hyperkit pid from json: 7653
	I0816 10:56:29.947650    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Searching for fa:f9:25:f5:87:2a in /var/db/dhcpd_leases ...
	I0816 10:56:29.947702    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	[... 17 dhcp lease entries identical to the Attempt 0 listing above; fa:f9:25:f5:87:2a still absent ...]
	I0816 10:56:31.831273    7449 main.go:141] libmachine: (offline-docker-087000) DBG | 2024/08/16 10:56:31 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0816 10:56:31.831447    7449 main.go:141] libmachine: (offline-docker-087000) DBG | 2024/08/16 10:56:31 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0816 10:56:31.831456    7449 main.go:141] libmachine: (offline-docker-087000) DBG | 2024/08/16 10:56:31 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0816 10:56:31.851457    7449 main.go:141] libmachine: (offline-docker-087000) DBG | 2024/08/16 10:56:31 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
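The rdmsr lines above are hyperkit logging guest reads of model-specific registers during boot (typically harmless); they are unrelated to the lease search, which keeps scanning entries of the fixed printed form {Name:... IPAddress:... HWAddress:... ID:... Lease:...}. A hypothetical parser for that printed form is sketched below; the struct and field names mirror the log output and are assumptions, not a published API.

    // Hypothetical parser for the dhcp entry lines printed in this log.
    package main

    import (
        "fmt"
        "regexp"
    )

    // dhcpEntry mirrors the fields shown in the log output above; the names
    // are assumed from that output, not taken from any published API.
    type dhcpEntry struct {
        Name, IPAddress, HWAddress, ID, Lease string
    }

    var entryRe = regexp.MustCompile(`\{Name:(\S+) IPAddress:(\S+) HWAddress:(\S+) ID:(\S+) Lease:(\S+)\}`)

    // parseEntry extracts one entry from a printed lease line, reporting
    // whether the line matched the expected format.
    func parseEntry(line string) (dhcpEntry, bool) {
        m := entryRe.FindStringSubmatch(line)
        if m == nil {
            return dhcpEntry{}, false
        }
        return dhcpEntry{Name: m[1], IPAddress: m[2], HWAddress: m[3], ID: m[4], Lease: m[5]}, true
    }

    func main() {
        e, ok := parseEntry("{Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}")
        fmt.Println(ok, e.HWAddress, e.IPAddress)
    }

Matching the target MAC against the HWAddress field of each parsed entry is the comparison every "Attempt N" block below is repeating.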
	I0816 10:56:31.949928    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Attempt 3
	I0816 10:56:31.949960    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:56:31.950127    7449 main.go:141] libmachine: (offline-docker-087000) DBG | hyperkit pid from json: 7653
	I0816 10:56:31.951725    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Searching for fa:f9:25:f5:87:2a in /var/db/dhcpd_leases ...
	I0816 10:56:31.951815    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	[... 17 dhcp lease entries identical to the Attempt 0 listing above; fa:f9:25:f5:87:2a still absent ...]
	I0816 10:56:33.952628    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Attempt 4
	I0816 10:56:33.952642    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:56:33.952752    7449 main.go:141] libmachine: (offline-docker-087000) DBG | hyperkit pid from json: 7653
	I0816 10:56:33.953535    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Searching for fa:f9:25:f5:87:2a in /var/db/dhcpd_leases ...
	I0816 10:56:33.953593    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	[... 17 dhcp lease entries identical to the Attempt 0 listing above; fa:f9:25:f5:87:2a still absent ...]
	I0816 10:56:35.955576    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Attempt 5
	I0816 10:56:35.955596    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:56:35.955710    7449 main.go:141] libmachine: (offline-docker-087000) DBG | hyperkit pid from json: 7653
	I0816 10:56:35.956527    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Searching for fa:f9:25:f5:87:2a in /var/db/dhcpd_leases ...
	I0816 10:56:35.956579    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	[... 17 dhcp lease entries identical to the Attempt 0 listing above; fa:f9:25:f5:87:2a still absent ...]
	I0816 10:56:37.957178    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Attempt 6
	I0816 10:56:37.957196    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:56:37.957252    7449 main.go:141] libmachine: (offline-docker-087000) DBG | hyperkit pid from json: 7653
	I0816 10:56:37.958022    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Searching for fa:f9:25:f5:87:2a in /var/db/dhcpd_leases ...
	I0816 10:56:37.958073    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	[... 17 dhcp lease entries identical to the Attempt 0 listing above; fa:f9:25:f5:87:2a still absent ...]
	I0816 10:56:39.958956    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Attempt 7
	I0816 10:56:39.958969    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:56:39.959052    7449 main.go:141] libmachine: (offline-docker-087000) DBG | hyperkit pid from json: 7653
	I0816 10:56:39.959904    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Searching for fa:f9:25:f5:87:2a in /var/db/dhcpd_leases ...
	I0816 10:56:39.959928    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	[... 17 dhcp lease entries identical to the Attempt 0 listing above; fa:f9:25:f5:87:2a still absent ...]
	I0816 10:56:41.962026    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Attempt 8
	I0816 10:56:41.962042    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:56:41.962112    7449 main.go:141] libmachine: (offline-docker-087000) DBG | hyperkit pid from json: 7653
	I0816 10:56:41.962993    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Searching for fa:f9:25:f5:87:2a in /var/db/dhcpd_leases ...
	I0816 10:56:41.963045    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	[... 17 dhcp lease entries identical to the Attempt 0 listing above; fa:f9:25:f5:87:2a still absent ...]
	I0816 10:56:43.965180    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Attempt 9
	I0816 10:56:43.965193    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:56:43.965288    7449 main.go:141] libmachine: (offline-docker-087000) DBG | hyperkit pid from json: 7653
	I0816 10:56:43.966119    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Searching for fa:f9:25:f5:87:2a in /var/db/dhcpd_leases ...
	I0816 10:56:43.966150    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	[... 17 dhcp lease entries identical to the Attempt 0 listing above; fa:f9:25:f5:87:2a still absent ...]
	I0816 10:56:45.966610    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Attempt 10
	I0816 10:56:45.966625    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:56:45.966738    7449 main.go:141] libmachine: (offline-docker-087000) DBG | hyperkit pid from json: 7653
	I0816 10:56:45.967828    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Searching for fa:f9:25:f5:87:2a in /var/db/dhcpd_leases ...
	I0816 10:56:45.967864    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	[... 17 dhcp lease entries identical to the Attempt 0 listing above; fa:f9:25:f5:87:2a still absent ...]
	I0816 10:56:47.969196    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Attempt 11
	I0816 10:56:47.969207    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:56:47.969300    7449 main.go:141] libmachine: (offline-docker-087000) DBG | hyperkit pid from json: 7653
	I0816 10:56:47.970111    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Searching for fa:f9:25:f5:87:2a in /var/db/dhcpd_leases ...
	I0816 10:56:47.970182    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	[... 17 dhcp lease entries identical to the Attempt 0 listing above; fa:f9:25:f5:87:2a still absent ...]
	I0816 10:56:49.970864    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Attempt 12
	I0816 10:56:49.970880    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:56:49.970947    7449 main.go:141] libmachine: (offline-docker-087000) DBG | hyperkit pid from json: 7653
	I0816 10:56:49.971768    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Searching for fa:f9:25:f5:87:2a in /var/db/dhcpd_leases ...
	I0816 10:56:49.971824    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	[... 17 dhcp lease entries identical to the Attempt 0 listing above; fa:f9:25:f5:87:2a still absent ...]
	I0816 10:56:51.973946    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Attempt 13
	I0816 10:56:51.973964    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:56:51.974012    7449 main.go:141] libmachine: (offline-docker-087000) DBG | hyperkit pid from json: 7653
	I0816 10:56:51.974823    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Searching for fa:f9:25:f5:87:2a in /var/db/dhcpd_leases ...
	I0816 10:56:51.974873    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:56:51.974883    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:56:51.974891    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:56:51.974898    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:56:51.974907    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:56:51.974915    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:56:51.974922    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:56:51.974928    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:56:51.974935    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:56:51.974941    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:56:51.974949    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:56:51.974957    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:56:51.974963    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:56:51.974977    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:56:51.974992    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:56:51.975011    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:56:51.975019    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:56:51.975025    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
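The loop above is the hyperkit driver waiting for the guest to take a DHCP lease: roughly every 2 seconds it re-reads /var/db/dhcpd_leases and looks for the VM's assigned MAC (fa:f9:25:f5:87:2a). The MAC never appears, which is consistent with the VM recreation and the eventual exit status 80 in the test output. As a readability aid only, the sketch below shows what such a poll loop can look like in Go; the function names, file parsing, and 30-attempt budget are illustrative assumptions, not the driver's actual code. One detail the log itself hints at: macOS writes MAC octets without leading zeros (e.g. a:ab:1f:8:77:9a above), so any comparison has to normalize both sides first.

// Minimal sketch (assumptions noted above; not the minikube driver's code)
// of polling /var/db/dhcpd_leases for a block whose hw_address matches a MAC.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
	"time"
)

// trimLeadingZeros normalizes a MAC so "0a:ab:1f:08:77:9a" compares equal
// to the zero-stripped form macOS writes, e.g. "a:ab:1f:8:77:9a".
func trimLeadingZeros(mac string) string {
	parts := strings.Split(strings.ToLower(mac), ":")
	for i, p := range parts {
		parts[i] = strings.TrimLeft(p, "0")
		if parts[i] == "" {
			parts[i] = "0"
		}
	}
	return strings.Join(parts, ":")
}

// findIPByMAC scans the leases file line by line, remembering the most
// recent ip_address and returning it when a matching hw_address is seen.
func findIPByMAC(path, mac string) (string, bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", false, err
	}
	defer f.Close()

	want := trimLeadingZeros(mac)
	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// Entries look like "hw_address=1,aa:bb:..."; drop the "1," type prefix.
			hw := strings.TrimPrefix(line, "hw_address=")
			if i := strings.Index(hw, ","); i >= 0 {
				hw = hw[i+1:]
			}
			if trimLeadingZeros(hw) == want {
				return ip, true, nil
			}
		}
	}
	return "", false, sc.Err()
}

func main() {
	const mac = "fa:f9:25:f5:87:2a" // the MAC the log above is waiting for
	for attempt := 1; attempt <= 30; attempt++ {
		ip, ok, err := findIPByMAC("/var/db/dhcpd_leases", mac)
		if err != nil {
			fmt.Fprintln(os.Stderr, "read leases:", err)
		} else if ok {
			fmt.Println("got IP:", ip)
			return
		}
		fmt.Printf("Attempt %d: no lease for %s yet\n", attempt, mac)
		time.Sleep(2 * time.Second)
	}
	fmt.Fprintln(os.Stderr, "machine never reported an IP")
}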
	[Attempts 14-27 (10:56:53.97 through 10:57:20.01) repeat the identical scan: each 2-second retry re-reads /var/db/dhcpd_leases, finds the same 17 lease entries for 192.169.0.2 through 192.169.0.18, and never matches fa:f9:25:f5:87:2a.]
	I0816 10:57:22.013333    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Attempt 28
	I0816 10:57:22.013346    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:57:22.013433    7449 main.go:141] libmachine: (offline-docker-087000) DBG | hyperkit pid from json: 7653
	I0816 10:57:22.014230    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Searching for fa:f9:25:f5:87:2a in /var/db/dhcpd_leases ...
	I0816 10:57:22.014287    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:57:22.014297    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:57:22.014307    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:57:22.014319    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:57:22.014327    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:57:22.014333    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:57:22.014341    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:57:22.014350    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:57:22.014357    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:57:22.014376    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:57:22.014384    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:57:22.014398    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:57:22.014405    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:57:22.014413    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:57:22.014420    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:57:22.014428    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:57:22.014442    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:57:22.014454    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:57:24.015873    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Attempt 29
	I0816 10:57:24.015890    7449 main.go:141] libmachine: (offline-docker-087000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:57:24.015944    7449 main.go:141] libmachine: (offline-docker-087000) DBG | hyperkit pid from json: 7653
	I0816 10:57:24.016756    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Searching for fa:f9:25:f5:87:2a in /var/db/dhcpd_leases ...
	I0816 10:57:24.016798    7449 main.go:141] libmachine: (offline-docker-087000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:57:24.016806    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:57:24.016826    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:57:24.016837    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:57:24.016853    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:57:24.016868    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:57:24.016885    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:57:24.016904    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:57:24.016914    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:57:24.016927    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:57:24.016943    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:57:24.016960    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:57:24.016969    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:57:24.016976    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:57:24.016984    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:57:24.016991    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:57:24.017000    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:57:24.017009    7449 main.go:141] libmachine: (offline-docker-087000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:57:26.018923    7449 client.go:171] duration metric: took 1m0.922695951s to LocalClient.Create
	I0816 10:57:28.020962    7449 start.go:128] duration metric: took 1m2.956331599s to createHost
	I0816 10:57:28.020986    7449 start.go:83] releasing machines lock for "offline-docker-087000", held for 1m2.956438499s
	W0816 10:57:28.021085    7449 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p offline-docker-087000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for fa:f9:25:f5:87:2a
	* Failed to start hyperkit VM. Running "minikube delete -p offline-docker-087000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for fa:f9:25:f5:87:2a
	I0816 10:57:28.084326    7449 out.go:201] 
	W0816 10:57:28.105402    7449 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for fa:f9:25:f5:87:2a
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for fa:f9:25:f5:87:2a
	W0816 10:57:28.105420    7449 out.go:270] * 
	* 
	W0816 10:57:28.106103    7449 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:57:28.168345    7449 out.go:201] 

                                                
                                                
** /stderr **
aab_offline_test.go:58: out/minikube-darwin-amd64 start -p offline-docker-087000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-08-16 10:57:28.280306 -0700 PDT m=+4190.934021996
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-087000 -n offline-docker-087000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-087000 -n offline-docker-087000: exit status 7 (82.58405ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 10:57:28.360837    7674 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0816 10:57:28.360857    7674 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-087000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "offline-docker-087000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-087000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-087000: (5.269404178s)
--- FAIL: TestOffline (195.54s)
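The "Attempt 27" through "Attempt 29" blocks above are the tail of the hyperkit driver's IP-discovery loop: after launching the VM it repeatedly scans /var/db/dhcpd_leases for a lease whose hardware address matches the MAC it generated for the guest (fa:f9:25:f5:87:2a). All 17 leases on file belong to earlier minikube VMs (192.169.0.2 through 192.169.0.18), so the retry budget runs out and the driver raises the "IP address never found in dhcp leases file" error seen above. A minimal sketch of that lookup, assuming only the lease-file layout implied by the "dhcp entry: {...}" dumps (findIPForMAC is an illustrative name, not the driver's actual API):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// findIPForMAC scans the macOS DHCP lease file for a block whose
	// hw_address matches mac and returns the ip_address recorded in the
	// same block. In the lease file ip_address precedes hw_address, so
	// remembering the most recent ip_address seen is sufficient.
	func findIPForMAC(leaseFile, mac string) (string, error) {
		f, err := os.Open(leaseFile)
		if err != nil {
			return "", err
		}
		defer f.Close()

		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				// hw_address=1,de:40:9:4d:dc:28 -- strip the "1," type prefix
				hw := strings.TrimPrefix(line, "hw_address=")
				if i := strings.IndexByte(hw, ','); i >= 0 {
					hw = hw[i+1:]
				}
				if strings.EqualFold(hw, mac) {
					return ip, nil
				}
			}
		}
		if err := sc.Err(); err != nil {
			return "", err
		}
		return "", fmt.Errorf("could not find an IP address for %s", mac)
	}

	func main() {
		ip, err := findIPForMAC("/var/db/dhcpd_leases", "fa:f9:25:f5:87:2a")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println(ip)
	}

Because no lease ever appears for the searched MAC across all 60 attempts, the likely root cause is that the guest never completed a DHCP exchange at all (it never booted far enough, or its traffic never reached the host's DHCP server), rather than a parsing problem on the host side.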

                                                
                                    
TestCertOptions (251.86s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-700000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit 
E0816 11:02:55.856111    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/addons-725000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:04:01.889688    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/skaffold-359000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:04:29.604821    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/skaffold-359000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:05:35.676855    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/functional-373000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-options-700000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit : exit status 80 (4m6.171308974s)

                                                
                                                
-- stdout --
	* [cert-options-700000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "cert-options-700000" primary control-plane node in "cert-options-700000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "cert-options-700000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for e:3b:57:b6:df:10
	* Failed to start hyperkit VM. Running "minikube delete -p cert-options-700000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 2e:9e:6d:c2:59:dc
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 2e:9e:6d:c2:59:dc
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-options-700000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-700000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p cert-options-700000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 50 (169.530122ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-700000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-amd64 -p cert-options-700000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 50
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-700000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-700000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p cert-options-700000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 50 (163.175036ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-700000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-amd64 ssh -p cert-options-700000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 50
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-700000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-08-16 11:06:55.671751 -0700 PDT m=+4758.270456184
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-700000 -n cert-options-700000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-700000 -n cert-options-700000: exit status 7 (78.04411ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 11:06:55.748155    8184 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0816 11:06:55.748176    8184 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-700000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "cert-options-700000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-700000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-700000: (5.234811211s)
--- FAIL: TestCertOptions (251.86s)
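The four SAN assertions above (cert_options_test.go:69) failed vacuously: with no control-plane IP, the ssh/openssl step returned an empty document, so there was no certificate to inspect. For reference, the check the test performs amounts to parsing apiserver.crt and comparing its DNS and IP SAN lists against the --apiserver-names/--apiserver-ips flags from the invocation above. A standalone sketch under that assumption (the file name and expected values are taken from this test run; this is not the test's own code):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"net"
		"os"
	)

	func containsString(ss []string, want string) bool {
		for _, s := range ss {
			if s == want {
				return true
			}
		}
		return false
	}

	func main() {
		data, err := os.ReadFile("apiserver.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// DNS SANs requested via --apiserver-names
		for _, name := range []string{"localhost", "www.google.com"} {
			if !containsString(cert.DNSNames, name) {
				fmt.Printf("apiserver cert does not include %s in SAN.\n", name)
			}
		}
		// IP SANs requested via --apiserver-ips
		for _, s := range []string{"127.0.0.1", "192.168.15.15"} {
			want, found := net.ParseIP(s), false
			for _, got := range cert.IPAddresses {
				if got.Equal(want) {
					found = true
				}
			}
			if !found {
				fmt.Printf("apiserver cert does not include %s in SAN.\n", s)
			}
		}
	}

The manual equivalent is the command the test shells in over ssh: openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt, then reading the Subject Alternative Name extension.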

                                                
                                    
TestCertExpiration (1702.81s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-236000 --memory=2048 --cert-expiration=3m --driver=hyperkit 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-236000 --memory=2048 --cert-expiration=3m --driver=hyperkit : exit status 80 (4m6.544130749s)

                                                
                                                
-- stdout --
	* [cert-expiration-236000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "cert-expiration-236000" primary control-plane node in "cert-expiration-236000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "cert-expiration-236000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for c2:74:18:5c:7e:aa
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-236000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for d2:e3:d8:34:59:12
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for d2:e3:d8:34:59:12
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-expiration-236000 --memory=2048 --cert-expiration=3m --driver=hyperkit " : exit status 80
E0816 11:06:32.763664    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/addons-725000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-236000 --memory=2048 --cert-expiration=8760h --driver=hyperkit 
E0816 11:09:01.883363    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/skaffold-359000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-236000 --memory=2048 --cert-expiration=8760h --driver=hyperkit : exit status 80 (21m10.911957164s)

                                                
                                                
-- stdout --
	* [cert-expiration-236000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "cert-expiration-236000" primary control-plane node in "cert-expiration-236000" cluster
	* Updating the running hyperkit "cert-expiration-236000" VM ...
	* Updating the running hyperkit "cert-expiration-236000" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-236000" may fix it: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-amd64 start -p cert-expiration-236000 --memory=2048 --cert-expiration=8760h --driver=hyperkit " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-236000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "cert-expiration-236000" primary control-plane node in "cert-expiration-236000" cluster
	* Updating the running hyperkit "cert-expiration-236000" VM ...
	* Updating the running hyperkit "cert-expiration-236000" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-236000" may fix it: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-08-16 11:30:03.296811 -0700 PDT m=+6145.867932772
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-236000 -n cert-expiration-236000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-236000 -n cert-expiration-236000: exit status 7 (79.385121ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 11:30:03.374277    9917 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0816 11:30:03.374298    9917 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-236000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "cert-expiration-236000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-236000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-236000: (5.269092713s)
--- FAIL: TestCertExpiration (1702.81s)
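TestCertExpiration's shape: provision with --cert-expiration=3m, let the certificates lapse, then restart with --cert-expiration=8760h and require the start output to warn about the expired certs. Here both starts died at the IP-lookup stage (first "IP address never found in dhcp leases file", then "IP address is not set" on the re-start against the existing profile), so the expiry-warning path was never exercised. The expiry condition itself is just a NotAfter comparison; a minimal standalone sketch, not minikube's implementation:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// Path is illustrative; inside the guest the apiserver cert
		// lives at /var/lib/minikube/certs/apiserver.crt.
		data, err := os.ReadFile("apiserver.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if remaining := time.Until(cert.NotAfter); remaining <= 0 {
			fmt.Printf("certificate expired %s ago (NotAfter=%s)\n",
				(-remaining).Round(time.Second), cert.NotAfter.Format(time.RFC3339))
		} else {
			fmt.Printf("certificate valid for another %s\n", remaining.Round(time.Second))
		}
	}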

                                                
                                    
TestDockerFlags (252.26s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-585000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit 
E0816 10:59:01.827050    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/skaffold-359000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:59:01.834466    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/skaffold-359000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:59:01.847974    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/skaffold-359000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:59:01.870410    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/skaffold-359000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:59:01.913635    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/skaffold-359000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:59:01.995172    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/skaffold-359000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:59:02.157448    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/skaffold-359000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:59:02.480882    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/skaffold-359000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:59:03.123005    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/skaffold-359000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:59:04.406420    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/skaffold-359000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:59:06.968427    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/skaffold-359000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:59:12.091735    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/skaffold-359000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:59:22.334914    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/skaffold-359000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:59:42.816054    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/skaffold-359000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:00:23.776231    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/skaffold-359000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:00:35.614681    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/functional-373000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:01:32.701764    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/addons-725000/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p docker-flags-585000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (4m6.517249462s)

                                                
                                                
-- stdout --
	* [docker-flags-585000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "docker-flags-585000" primary control-plane node in "docker-flags-585000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "docker-flags-585000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 10:58:36.866417    7719 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:58:36.866688    7719 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:58:36.866694    7719 out.go:358] Setting ErrFile to fd 2...
	I0816 10:58:36.866697    7719 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:58:36.866876    7719 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1276/.minikube/bin
	I0816 10:58:36.868374    7719 out.go:352] Setting JSON to false
	I0816 10:58:36.890907    7719 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5286,"bootTime":1723825830,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0816 10:58:36.891004    7719 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:58:36.912748    7719 out.go:177] * [docker-flags-585000] minikube v1.33.1 on Darwin 14.6.1
	I0816 10:58:36.956505    7719 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:58:36.956509    7719 notify.go:220] Checking for updates...
	I0816 10:58:36.999365    7719 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:58:37.020411    7719 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0816 10:58:37.041225    7719 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:58:37.062393    7719 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:58:37.083366    7719 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:58:37.104663    7719 config.go:182] Loaded profile config "force-systemd-flag-576000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:58:37.104747    7719 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:58:37.133467    7719 out.go:177] * Using the hyperkit driver based on user configuration
	I0816 10:58:37.175192    7719 start.go:297] selected driver: hyperkit
	I0816 10:58:37.175207    7719 start.go:901] validating driver "hyperkit" against <nil>
	I0816 10:58:37.175221    7719 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:58:37.178197    7719 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:58:37.178319    7719 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19461-1276/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0816 10:58:37.186769    7719 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0816 10:58:37.190726    7719 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:58:37.190746    7719 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0816 10:58:37.190785    7719 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 10:58:37.190983    7719 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0816 10:58:37.191013    7719 cni.go:84] Creating CNI manager for ""
	I0816 10:58:37.191028    7719 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 10:58:37.191032    7719 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 10:58:37.191095    7719 start.go:340] cluster config:
	{Name:docker-flags-585000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-585000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:58:37.191197    7719 iso.go:125] acquiring lock: {Name:mkb430b43b5e7a8a683454482519837ed996c78d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:58:37.212502    7719 out.go:177] * Starting "docker-flags-585000" primary control-plane node in "docker-flags-585000" cluster
	I0816 10:58:37.254216    7719 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:58:37.254253    7719 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0816 10:58:37.254271    7719 cache.go:56] Caching tarball of preloaded images
	I0816 10:58:37.254395    7719 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 10:58:37.254404    7719 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:58:37.254493    7719 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/docker-flags-585000/config.json ...
	I0816 10:58:37.254510    7719 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/docker-flags-585000/config.json: {Name:mk102a6e02c5691038e1f75767f8f28e37656946 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:58:37.254825    7719 start.go:360] acquireMachinesLock for docker-flags-585000: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:59:34.189654    7719 start.go:364] duration metric: took 56.936560056s to acquireMachinesLock for "docker-flags-585000"
	I0816 10:59:34.189693    7719 start.go:93] Provisioning new machine with config: &{Name:docker-flags-585000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-585000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:59:34.189747    7719 start.go:125] createHost starting for "" (driver="hyperkit")
	I0816 10:59:34.211426    7719 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0816 10:59:34.211555    7719 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:59:34.211590    7719 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:59:34.220345    7719 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53894
	I0816 10:59:34.220757    7719 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:59:34.221336    7719 main.go:141] libmachine: Using API Version  1
	I0816 10:59:34.221348    7719 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:59:34.221633    7719 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:59:34.221742    7719 main.go:141] libmachine: (docker-flags-585000) Calling .GetMachineName
	I0816 10:59:34.221844    7719 main.go:141] libmachine: (docker-flags-585000) Calling .DriverName
	I0816 10:59:34.221958    7719 start.go:159] libmachine.API.Create for "docker-flags-585000" (driver="hyperkit")
	I0816 10:59:34.221981    7719 client.go:168] LocalClient.Create starting
	I0816 10:59:34.222023    7719 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem
	I0816 10:59:34.222075    7719 main.go:141] libmachine: Decoding PEM data...
	I0816 10:59:34.222090    7719 main.go:141] libmachine: Parsing certificate...
	I0816 10:59:34.222144    7719 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem
	I0816 10:59:34.222181    7719 main.go:141] libmachine: Decoding PEM data...
	I0816 10:59:34.222193    7719 main.go:141] libmachine: Parsing certificate...
	I0816 10:59:34.222206    7719 main.go:141] libmachine: Running pre-create checks...
	I0816 10:59:34.222216    7719 main.go:141] libmachine: (docker-flags-585000) Calling .PreCreateCheck
	I0816 10:59:34.222291    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:59:34.222434    7719 main.go:141] libmachine: (docker-flags-585000) Calling .GetConfigRaw
	I0816 10:59:34.275055    7719 main.go:141] libmachine: Creating machine...
	I0816 10:59:34.275078    7719 main.go:141] libmachine: (docker-flags-585000) Calling .Create
	I0816 10:59:34.275169    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:59:34.275329    7719 main.go:141] libmachine: (docker-flags-585000) DBG | I0816 10:59:34.275164    7742 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:59:34.275382    7719 main.go:141] libmachine: (docker-flags-585000) Downloading /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1276/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 10:59:34.458994    7719 main.go:141] libmachine: (docker-flags-585000) DBG | I0816 10:59:34.458885    7742 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/id_rsa...
	I0816 10:59:34.567351    7719 main.go:141] libmachine: (docker-flags-585000) DBG | I0816 10:59:34.567267    7742 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/docker-flags-585000.rawdisk...
	I0816 10:59:34.567368    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Writing magic tar header
	I0816 10:59:34.567379    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Writing SSH key tar header
	I0816 10:59:34.567963    7719 main.go:141] libmachine: (docker-flags-585000) DBG | I0816 10:59:34.567923    7742 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000 ...
	I0816 10:59:34.943321    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:59:34.943341    7719 main.go:141] libmachine: (docker-flags-585000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/hyperkit.pid
	I0816 10:59:34.943351    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Using UUID 6bbfd076-2f87-4b6b-98e3-841e9a620a36
	I0816 10:59:34.968180    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Generated MAC ae:0:ea:d7:22:bf
	I0816 10:59:34.968201    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-585000
	I0816 10:59:34.968244    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 10:59:34 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"6bbfd076-2f87-4b6b-98e3-841e9a620a36", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001b0630)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:59:34.968273    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 10:59:34 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"6bbfd076-2f87-4b6b-98e3-841e9a620a36", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001b0630)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:59:34.968330    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 10:59:34 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "6bbfd076-2f87-4b6b-98e3-841e9a620a36", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/docker-flags-585000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-585000"}
	I0816 10:59:34.968385    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 10:59:34 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 6bbfd076-2f87-4b6b-98e3-841e9a620a36 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/docker-flags-585000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-585000"
	I0816 10:59:34.968430    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 10:59:34 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 10:59:34.971365    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 10:59:34 DEBUG: hyperkit: Pid is 7743
	I0816 10:59:34.971716    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 0
	I0816 10:59:34.971737    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:59:34.971869    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 7743
	I0816 10:59:34.972778    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for ae:0:ea:d7:22:bf in /var/db/dhcpd_leases ...
	I0816 10:59:34.972832    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:59:34.972850    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:59:34.972871    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:59:34.972883    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:59:34.972895    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:59:34.972918    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:59:34.972926    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:59:34.972938    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:59:34.972950    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:59:34.972965    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:59:34.972975    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:59:34.972994    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:59:34.973010    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:59:34.973047    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:59:34.973064    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:59:34.973078    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:59:34.973087    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:59:34.973097    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:59:34.978876    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 10:59:34 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 10:59:34.986807    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 10:59:34 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 10:59:34.987694    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 10:59:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:59:34.987727    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 10:59:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:59:34.987740    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 10:59:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:59:34.987751    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 10:59:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:59:35.359602    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 10:59:35 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 10:59:35.359625    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 10:59:35 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 10:59:35.474455    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 10:59:35 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:59:35.474470    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 10:59:35 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:59:35.474484    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 10:59:35 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:59:35.474511    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 10:59:35 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:59:35.475450    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 10:59:35 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 10:59:35.475470    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 10:59:35 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 10:59:36.974049    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 1
	I0816 10:59:36.974068    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:59:36.974103    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 7743
	I0816 10:59:36.974890    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for ae:0:ea:d7:22:bf in /var/db/dhcpd_leases ...
	I0816 10:59:36.974942    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:59:36.974959    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:59:36.974968    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:59:36.974975    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:59:36.974982    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:59:36.974989    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:59:36.975008    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:59:36.975030    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:59:36.975046    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:59:36.975058    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:59:36.975066    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:59:36.975074    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:59:36.975082    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:59:36.975090    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:59:36.975119    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:59:36.975131    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:59:36.975140    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:59:36.975146    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:59:38.975853    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 2
	I0816 10:59:38.975869    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:59:38.975965    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 7743
	I0816 10:59:38.976791    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for ae:0:ea:d7:22:bf in /var/db/dhcpd_leases ...
	I0816 10:59:38.976806    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:59:38.976812    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:59:38.976820    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:59:38.976826    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:59:38.976847    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:59:38.976857    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:59:38.976865    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:59:38.976871    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:59:38.976877    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:59:38.976884    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:59:38.976889    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:59:38.976896    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:59:38.976904    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:59:38.976913    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:59:38.976921    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:59:38.976928    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:59:38.976938    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:59:38.976946    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:59:40.855212    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 10:59:40 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0816 10:59:40.855422    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 10:59:40 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0816 10:59:40.855441    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 10:59:40 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0816 10:59:40.876095    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 10:59:40 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0816 10:59:40.979071    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 3
	I0816 10:59:40.979098    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:59:40.979334    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 7743
	I0816 10:59:40.980773    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for ae:0:ea:d7:22:bf in /var/db/dhcpd_leases ...
	I0816 10:59:40.980919    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:59:40.980940    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:59:40.980955    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:59:40.980986    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:59:40.980997    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:59:40.981007    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:59:40.981025    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:59:40.981036    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:59:40.981084    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:59:40.981102    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:59:40.981112    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:59:40.981123    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:59:40.981145    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:59:40.981173    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:59:40.981193    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:59:40.981210    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:59:40.981221    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:59:40.981233    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:59:42.982173    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 4
	I0816 10:59:42.982187    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:59:42.982281    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 7743
	I0816 10:59:42.983082    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for ae:0:ea:d7:22:bf in /var/db/dhcpd_leases ...
	I0816 10:59:42.983117    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:59:42.983125    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:59:42.983133    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:59:42.983139    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:59:42.983159    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:59:42.983166    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:59:42.983173    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:59:42.983183    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:59:42.983198    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:59:42.983210    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:59:42.983218    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:59:42.983226    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:59:42.983233    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:59:42.983239    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:59:42.983248    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:59:42.983256    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:59:42.983265    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:59:42.983273    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:59:44.985218    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 5
	I0816 10:59:44.985232    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:59:44.985334    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 7743
	I0816 10:59:44.986113    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for ae:0:ea:d7:22:bf in /var/db/dhcpd_leases ...
	I0816 10:59:44.986173    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:59:44.986188    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:59:44.986198    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:59:44.986208    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:59:44.986214    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:59:44.986233    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:59:44.986259    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:59:44.986285    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:59:44.986300    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:59:44.986309    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:59:44.986317    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:59:44.986324    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:59:44.986331    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:59:44.986338    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:59:44.986346    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:59:44.986352    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:59:44.986360    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:59:44.986377    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:59:46.988293    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 6
	I0816 10:59:46.988305    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:59:46.988366    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 7743
	I0816 10:59:46.989173    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for ae:0:ea:d7:22:bf in /var/db/dhcpd_leases ...
	I0816 10:59:46.989219    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:59:46.989238    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:59:46.989245    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:59:46.989255    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:59:46.989263    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:59:46.989271    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:59:46.989280    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:59:46.989292    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:59:46.989302    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:59:46.989310    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:59:46.989319    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:59:46.989326    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:59:46.989334    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:59:46.989341    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:59:46.989347    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:59:46.989354    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:59:46.989361    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:59:46.989369    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:59:48.991312    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 7
	I0816 10:59:48.991324    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:59:48.991367    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 7743
	I0816 10:59:48.992150    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for ae:0:ea:d7:22:bf in /var/db/dhcpd_leases ...
	I0816 10:59:48.992200    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:59:48.992217    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:59:48.992229    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:59:48.992240    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:59:48.992247    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:59:48.992261    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:59:48.992274    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:59:48.992287    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:59:48.992295    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:59:48.992302    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:59:48.992310    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:59:48.992317    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:59:48.992325    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:59:48.992339    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:59:48.992352    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:59:48.992360    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:59:48.992368    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:59:48.992377    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:59:50.994323    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 8
	I0816 10:59:50.994334    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:59:50.994437    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 7743
	I0816 10:59:50.995352    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for ae:0:ea:d7:22:bf in /var/db/dhcpd_leases ...
	I0816 10:59:50.995391    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:59:50.995408    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:59:50.995439    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:59:50.995451    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:59:50.995461    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:59:50.995470    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:59:50.995487    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:59:50.995498    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:59:50.995515    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:59:50.995523    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:59:50.995530    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:59:50.995537    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:59:50.995543    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:59:50.995551    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:59:50.995560    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:59:50.995565    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:59:50.995571    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:59:50.995579    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:59:52.997508    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 9
	I0816 10:59:52.997526    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:59:52.997580    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 7743
	I0816 10:59:52.998370    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for ae:0:ea:d7:22:bf in /var/db/dhcpd_leases ...
	I0816 10:59:52.998410    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:59:52.998422    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:59:52.998431    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:59:52.998438    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:59:52.998454    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:59:52.998467    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:59:52.998480    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:59:52.998493    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:59:52.998501    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:59:52.998507    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:59:52.998525    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:59:52.998538    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:59:52.998553    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:59:52.998563    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:59:52.998571    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:59:52.998578    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:59:52.998584    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:59:52.998590    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:59:55.000574    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 10
	I0816 10:59:55.000591    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:59:55.000692    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 7743
	I0816 10:59:55.001454    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for ae:0:ea:d7:22:bf in /var/db/dhcpd_leases ...
	I0816 10:59:55.001519    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:59:55.001529    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:59:55.001545    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:59:55.001553    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:59:55.001562    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:59:55.001569    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:59:55.001576    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:59:55.001581    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:59:55.001600    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:59:55.001612    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:59:55.001619    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:59:55.001632    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:59:55.001640    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:59:55.001647    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:59:55.001655    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:59:55.001660    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:59:55.001667    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:59:55.001675    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:59:57.002918    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 11
	I0816 10:59:57.002941    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:59:57.003049    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 7743
	I0816 10:59:57.003839    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for ae:0:ea:d7:22:bf in /var/db/dhcpd_leases ...
	I0816 10:59:57.003896    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:59:57.003907    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:59:57.003918    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:59:57.003934    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:59:57.003944    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:59:57.003953    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:59:57.003961    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:59:57.003968    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:59:57.003976    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:59:57.003987    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:59:57.003994    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:59:57.004011    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:59:57.004025    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:59:57.004037    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:59:57.004059    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:59:57.004091    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:59:57.004098    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:59:57.004112    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:59:59.004387    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 12
	I0816 10:59:59.004418    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:59:59.004469    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 7743
	I0816 10:59:59.005237    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for ae:0:ea:d7:22:bf in /var/db/dhcpd_leases ...
	I0816 10:59:59.005292    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:59:59.005304    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:59:59.005314    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:59:59.005323    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:59:59.005330    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:59:59.005337    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:59:59.005358    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:59:59.005366    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:59:59.005375    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:59:59.005387    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:59:59.005395    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:59:59.005409    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:59:59.005417    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:59:59.005427    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:59:59.005435    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:59:59.005442    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:59:59.005450    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:59:59.005465    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:00:01.006116    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 13
	I0816 11:00:01.006129    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:00:01.006218    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 7743
	I0816 11:00:01.007014    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for ae:0:ea:d7:22:bf in /var/db/dhcpd_leases ...
	I0816 11:00:01.007055    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:00:01.007066    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:00:01.007076    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:00:01.007082    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:00:01.007088    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:00:01.007094    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:00:01.007101    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:00:01.007108    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:00:01.007115    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:00:01.007122    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:00:01.007138    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:00:01.007148    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:00:01.007155    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:00:01.007165    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:00:01.007174    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:00:01.007181    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:00:01.007188    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:00:01.007199    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:00:03.007294    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 14
	I0816 11:00:03.007310    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:00:03.007344    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 7743
	I0816 11:00:03.008089    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for ae:0:ea:d7:22:bf in /var/db/dhcpd_leases ...
	I0816 11:00:03.008141    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:00:03.008165    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:00:03.008177    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:00:03.008186    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:00:03.008193    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:00:03.008200    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:00:03.008205    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:00:03.008212    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:00:03.008220    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:00:03.008237    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:00:03.008248    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:00:03.008264    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:00:03.008273    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:00:03.008281    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:00:03.008289    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:00:03.008295    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:00:03.008302    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:00:03.008311    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:00:05.010286    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 15
	I0816 11:00:05.010311    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:00:05.010378    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 7743
	I0816 11:00:05.011263    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for ae:0:ea:d7:22:bf in /var/db/dhcpd_leases ...
	I0816 11:00:05.011313    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:00:05.011327    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:00:05.011345    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:00:05.011355    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:00:05.011362    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:00:05.011368    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:00:05.011375    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:00:05.011383    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:00:05.011389    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:00:05.011397    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:00:05.011404    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:00:05.011409    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:00:05.011421    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:00:05.011434    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:00:05.011444    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:00:05.011452    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:00:05.011464    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:00:05.011475    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:00:07.012882    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 16
	I0816 11:00:07.012897    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:00:07.012968    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 7743
	I0816 11:00:07.013793    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for ae:0:ea:d7:22:bf in /var/db/dhcpd_leases ...
	I0816 11:00:07.013802    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:00:07.013810    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:00:07.013817    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:00:07.013834    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:00:07.013847    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:00:07.013855    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:00:07.013861    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:00:07.013881    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:00:07.013894    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:00:07.013903    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:00:07.013912    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:00:07.013920    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:00:07.013928    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:00:07.013935    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:00:07.013941    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:00:07.013947    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:00:07.013954    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:00:07.013961    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:00:09.014799    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 17
	I0816 11:00:09.014812    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:00:09.014869    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 7743
	I0816 11:00:09.015772    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for ae:0:ea:d7:22:bf in /var/db/dhcpd_leases ...
	I0816 11:00:09.015817    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:00:09.015828    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:00:09.015840    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:00:09.015847    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:00:09.015854    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:00:09.015862    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:00:09.015869    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:00:09.015876    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:00:09.015882    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:00:09.015892    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:00:09.015899    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:00:09.015906    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:00:09.015914    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:00:09.015921    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:00:09.015931    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:00:09.015938    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:00:09.015946    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:00:09.015961    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:00:11.016436    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 18
	I0816 11:00:11.016452    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:00:11.016518    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 7743
	I0816 11:00:11.017371    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for ae:0:ea:d7:22:bf in /var/db/dhcpd_leases ...
	I0816 11:00:11.017430    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:00:11.017440    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:00:11.017447    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:00:11.017456    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:00:11.017472    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:00:11.017478    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:00:11.017506    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:00:11.017515    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:00:11.017523    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:00:11.017529    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:00:11.017535    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:00:11.017542    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:00:11.017549    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:00:11.017561    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:00:11.017569    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:00:11.017578    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:00:11.017586    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:00:11.017603    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:00:13.019559    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 19
	I0816 11:00:13.019588    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:00:13.019645    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 7743
	I0816 11:00:13.020757    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for ae:0:ea:d7:22:bf in /var/db/dhcpd_leases ...
	I0816 11:00:13.020800    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:00:13.020817    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:00:13.020828    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:00:13.020838    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:00:13.020847    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:00:13.020854    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:00:13.020861    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:00:13.020869    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:00:13.020876    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:00:13.020891    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:00:13.020904    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:00:13.020912    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:00:13.020921    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:00:13.020935    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:00:13.020948    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:00:13.020964    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:00:13.020973    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:00:13.020982    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:00:15.020989    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 20
	I0816 11:00:15.021002    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:00:15.021081    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 7743
	I0816 11:00:15.021864    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for ae:0:ea:d7:22:bf in /var/db/dhcpd_leases ...
	I0816 11:00:15.021931    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:00:15.021940    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:00:15.021949    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:00:15.021956    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:00:15.021963    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:00:15.021969    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:00:15.021977    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:00:15.021987    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:00:15.021994    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:00:15.022010    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:00:15.022026    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:00:15.022045    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:00:15.022059    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:00:15.022072    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:00:15.022080    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:00:15.022089    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:00:15.022102    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:00:15.022110    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:00:17.022219    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 21
	I0816 11:00:17.022233    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:00:17.022300    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 7743
	I0816 11:00:17.023110    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for ae:0:ea:d7:22:bf in /var/db/dhcpd_leases ...
	I0816 11:00:17.023159    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:00:17.023170    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:00:17.023185    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:00:17.023194    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:00:17.023201    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:00:17.023208    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:00:17.023219    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:00:17.023228    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:00:17.023234    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:00:17.023241    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:00:17.023249    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:00:17.023256    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:00:17.023263    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:00:17.023276    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:00:17.023283    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:00:17.023292    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:00:17.023319    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:00:17.023331    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:00:19.025337    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 22
	I0816 11:00:19.025352    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:00:19.025419    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 7743
	I0816 11:00:19.026214    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for ae:0:ea:d7:22:bf in /var/db/dhcpd_leases ...
	I0816 11:00:19.026275    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:00:19.026286    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:00:19.026293    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:00:19.026299    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:00:19.026330    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:00:19.026342    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:00:19.026349    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:00:19.026359    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:00:19.026368    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:00:19.026384    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:00:19.026394    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:00:19.026402    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:00:19.026410    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:00:19.026418    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:00:19.026425    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:00:19.026439    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:00:19.026452    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:00:19.026467    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:00:21.027118    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 23
	I0816 11:00:21.027130    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:00:21.027217    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 7743
	I0816 11:00:21.028083    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for ae:0:ea:d7:22:bf in /var/db/dhcpd_leases ...
	I0816 11:00:21.028138    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:00:21.028150    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:00:21.028158    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:00:21.028166    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:00:21.028173    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:00:21.028181    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:00:21.028189    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:00:21.028205    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:00:21.028211    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:00:21.028219    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:00:21.028229    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:00:21.028241    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:00:21.028252    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:00:21.028260    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:00:21.028269    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:00:21.028284    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:00:21.028297    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:00:21.028307    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:00:23.028729    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 24
	I0816 11:00:23.028749    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:00:23.028832    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 7743
	I0816 11:00:23.029630    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for ae:0:ea:d7:22:bf in /var/db/dhcpd_leases ...
	I0816 11:00:23.029679    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:00:23.029691    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:00:23.029701    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:00:23.029707    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:00:23.029714    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:00:23.029727    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:00:23.029734    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:00:23.029740    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:00:23.029753    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:00:23.029762    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:00:23.029768    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:00:23.029775    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:00:23.029783    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:00:23.029792    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:00:23.029806    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:00:23.029819    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:00:23.029838    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:00:23.029851    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:00:25.030841    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 25
	I0816 11:00:25.030856    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:00:25.030923    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 7743
	I0816 11:00:25.031722    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for ae:0:ea:d7:22:bf in /var/db/dhcpd_leases ...
	I0816 11:00:25.031761    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:00:25.031776    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:00:25.031794    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:00:25.031801    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:00:25.031808    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:00:25.031817    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:00:25.031824    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:00:25.031831    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:00:25.031838    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:00:25.031847    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:00:25.031853    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:00:25.031860    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:00:25.031867    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:00:25.031874    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:00:25.031888    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:00:25.031902    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:00:25.031910    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:00:25.031918    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:00:27.033888    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 26
	I0816 11:00:27.033901    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:00:27.033954    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 7743
	I0816 11:00:27.034770    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for ae:0:ea:d7:22:bf in /var/db/dhcpd_leases ...
	I0816 11:00:27.034823    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:00:27.034832    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:00:27.034839    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:00:27.034844    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:00:27.034851    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:00:27.034856    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:00:27.034863    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:00:27.034871    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:00:27.034884    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:00:27.034892    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:00:27.034899    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:00:27.034906    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:00:27.034912    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:00:27.034922    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:00:27.034928    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:00:27.034934    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:00:27.034941    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:00:27.034951    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:00:29.035273    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 27
	I0816 11:00:29.035284    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:00:29.035343    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 7743
	I0816 11:00:29.036159    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for ae:0:ea:d7:22:bf in /var/db/dhcpd_leases ...
	I0816 11:00:29.036200    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:00:29.036212    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:00:29.036226    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:00:29.036236    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:00:29.036256    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:00:29.036264    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:00:29.036275    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:00:29.036283    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:00:29.036290    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:00:29.036298    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:00:29.036304    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:00:29.036311    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:00:29.036318    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:00:29.036325    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:00:29.036333    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:00:29.036341    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:00:29.036350    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:00:29.036358    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:00:31.038066    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 28
	I0816 11:00:31.038076    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:00:31.038129    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 7743
	I0816 11:00:31.038906    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for ae:0:ea:d7:22:bf in /var/db/dhcpd_leases ...
	I0816 11:00:31.038963    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:00:31.038980    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:00:31.039016    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:00:31.039028    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:00:31.039041    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:00:31.039050    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:00:31.039060    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:00:31.039070    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:00:31.039077    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:00:31.039084    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:00:31.039107    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:00:31.039117    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:00:31.039126    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:00:31.039133    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:00:31.039139    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:00:31.039155    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:00:31.039166    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:00:31.039177    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:00:33.041057    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 29
	I0816 11:00:33.041078    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:00:33.041118    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 7743
	I0816 11:00:33.041967    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for ae:0:ea:d7:22:bf in /var/db/dhcpd_leases ...
	I0816 11:00:33.042008    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:00:33.042015    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:00:33.042024    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:00:33.042033    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:00:33.042040    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:00:33.042046    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:00:33.042052    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:00:33.042059    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:00:33.042066    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:00:33.042074    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:00:33.042081    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:00:33.042088    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:00:33.042095    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:00:33.042107    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:00:33.042114    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:00:33.042120    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:00:33.042135    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:00:33.042148    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:00:35.043100    7719 client.go:171] duration metric: took 1m0.822972904s to LocalClient.Create
	I0816 11:00:37.043920    7719 start.go:128] duration metric: took 1m2.856067254s to createHost
	I0816 11:00:37.043939    7719 start.go:83] releasing machines lock for "docker-flags-585000", held for 1m2.856198033s
	W0816 11:00:37.043951    7719 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ae:0:ea:d7:22:bf
	I0816 11:00:37.044316    7719 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 11:00:37.044355    7719 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 11:00:37.053577    7719 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53898
	I0816 11:00:37.053972    7719 main.go:141] libmachine: () Calling .GetVersion
	I0816 11:00:37.054429    7719 main.go:141] libmachine: Using API Version  1
	I0816 11:00:37.054439    7719 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 11:00:37.054674    7719 main.go:141] libmachine: () Calling .GetMachineName
	I0816 11:00:37.055131    7719 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 11:00:37.055172    7719 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 11:00:37.063900    7719 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53900
	I0816 11:00:37.064361    7719 main.go:141] libmachine: () Calling .GetVersion
	I0816 11:00:37.064923    7719 main.go:141] libmachine: Using API Version  1
	I0816 11:00:37.064940    7719 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 11:00:37.065225    7719 main.go:141] libmachine: () Calling .GetMachineName
	I0816 11:00:37.065360    7719 main.go:141] libmachine: (docker-flags-585000) Calling .GetState
	I0816 11:00:37.065473    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:00:37.065547    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 7743
	I0816 11:00:37.066518    7719 main.go:141] libmachine: (docker-flags-585000) Calling .DriverName
	I0816 11:00:37.087263    7719 out.go:177] * Deleting "docker-flags-585000" in hyperkit ...
	I0816 11:00:37.129285    7719 main.go:141] libmachine: (docker-flags-585000) Calling .Remove
	I0816 11:00:37.129406    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:00:37.129416    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:00:37.129494    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 7743
	I0816 11:00:37.130442    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:00:37.130490    7719 main.go:141] libmachine: (docker-flags-585000) DBG | waiting for graceful shutdown
	I0816 11:00:38.131348    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:00:38.131526    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 7743
	I0816 11:00:38.132393    7719 main.go:141] libmachine: (docker-flags-585000) DBG | waiting for graceful shutdown
	I0816 11:00:39.134505    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:00:39.134579    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 7743
	I0816 11:00:39.136120    7719 main.go:141] libmachine: (docker-flags-585000) DBG | waiting for graceful shutdown
	I0816 11:00:40.136779    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:00:40.136898    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 7743
	I0816 11:00:40.137500    7719 main.go:141] libmachine: (docker-flags-585000) DBG | waiting for graceful shutdown
	I0816 11:00:41.139157    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:00:41.139236    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 7743
	I0816 11:00:41.139807    7719 main.go:141] libmachine: (docker-flags-585000) DBG | waiting for graceful shutdown
	I0816 11:00:42.141770    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:00:42.141825    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 7743
	I0816 11:00:42.142879    7719 main.go:141] libmachine: (docker-flags-585000) DBG | sending sigkill
	I0816 11:00:42.142889    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	W0816 11:00:42.155664    7719 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ae:0:ea:d7:22:bf
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ae:0:ea:d7:22:bf
	I0816 11:00:42.155682    7719 start.go:729] Will try again in 5 seconds ...
	I0816 11:00:42.165886    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 11:00:42 WARN : hyperkit: failed to read stderr: EOF
	I0816 11:00:42.165906    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 11:00:42 WARN : hyperkit: failed to read stdout: EOF
	I0816 11:00:47.157602    7719 start.go:360] acquireMachinesLock for docker-flags-585000: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 11:01:39.990209    7719 start.go:364] duration metric: took 52.834204465s to acquireMachinesLock for "docker-flags-585000"
	I0816 11:01:39.990244    7719 start.go:93] Provisioning new machine with config: &{Name:docker-flags-585000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-585000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 11:01:39.990312    7719 start.go:125] createHost starting for "" (driver="hyperkit")
	I0816 11:01:40.032444    7719 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0816 11:01:40.032539    7719 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 11:01:40.032561    7719 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 11:01:40.041632    7719 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53907
	I0816 11:01:40.042154    7719 main.go:141] libmachine: () Calling .GetVersion
	I0816 11:01:40.042682    7719 main.go:141] libmachine: Using API Version  1
	I0816 11:01:40.042707    7719 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 11:01:40.043076    7719 main.go:141] libmachine: () Calling .GetMachineName
	I0816 11:01:40.043227    7719 main.go:141] libmachine: (docker-flags-585000) Calling .GetMachineName
	I0816 11:01:40.043323    7719 main.go:141] libmachine: (docker-flags-585000) Calling .DriverName
	I0816 11:01:40.043486    7719 start.go:159] libmachine.API.Create for "docker-flags-585000" (driver="hyperkit")
	I0816 11:01:40.043501    7719 client.go:168] LocalClient.Create starting
	I0816 11:01:40.043529    7719 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem
	I0816 11:01:40.043581    7719 main.go:141] libmachine: Decoding PEM data...
	I0816 11:01:40.043594    7719 main.go:141] libmachine: Parsing certificate...
	I0816 11:01:40.043643    7719 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem
	I0816 11:01:40.043684    7719 main.go:141] libmachine: Decoding PEM data...
	I0816 11:01:40.043696    7719 main.go:141] libmachine: Parsing certificate...
	I0816 11:01:40.043709    7719 main.go:141] libmachine: Running pre-create checks...
	I0816 11:01:40.043714    7719 main.go:141] libmachine: (docker-flags-585000) Calling .PreCreateCheck
	I0816 11:01:40.043839    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:01:40.043858    7719 main.go:141] libmachine: (docker-flags-585000) Calling .GetConfigRaw
	I0816 11:01:40.053713    7719 main.go:141] libmachine: Creating machine...
	I0816 11:01:40.053724    7719 main.go:141] libmachine: (docker-flags-585000) Calling .Create
	I0816 11:01:40.053844    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:01:40.054046    7719 main.go:141] libmachine: (docker-flags-585000) DBG | I0816 11:01:40.053876    8058 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 11:01:40.054162    7719 main.go:141] libmachine: (docker-flags-585000) Downloading /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1276/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 11:01:40.472988    7719 main.go:141] libmachine: (docker-flags-585000) DBG | I0816 11:01:40.472890    8058 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/id_rsa...
	I0816 11:01:40.653772    7719 main.go:141] libmachine: (docker-flags-585000) DBG | I0816 11:01:40.653725    8058 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/docker-flags-585000.rawdisk...
	I0816 11:01:40.653786    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Writing magic tar header
	I0816 11:01:40.653809    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Writing SSH key tar header
	I0816 11:01:40.654226    7719 main.go:141] libmachine: (docker-flags-585000) DBG | I0816 11:01:40.654161    8058 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000 ...
	I0816 11:01:41.030598    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:01:41.030617    7719 main.go:141] libmachine: (docker-flags-585000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/hyperkit.pid
	I0816 11:01:41.030679    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Using UUID 77b29e0c-bf7b-4c30-999d-20ab5c133d68
	I0816 11:01:41.057226    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Generated MAC f6:6:3f:b1:b2:c7
	I0816 11:01:41.057252    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-585000
	I0816 11:01:41.057317    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 11:01:41 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"77b29e0c-bf7b-4c30-999d-20ab5c133d68", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000224330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 11:01:41.057360    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 11:01:41 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"77b29e0c-bf7b-4c30-999d-20ab5c133d68", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000224330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 11:01:41.057405    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 11:01:41 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "77b29e0c-bf7b-4c30-999d-20ab5c133d68", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/docker-flags-585000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-585000"}
	I0816 11:01:41.057458    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 11:01:41 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 77b29e0c-bf7b-4c30-999d-20ab5c133d68 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/docker-flags-585000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-585000"
	I0816 11:01:41.057470    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 11:01:41 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 11:01:41.060654    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 11:01:41 DEBUG: hyperkit: Pid is 8072
	I0816 11:01:41.061334    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 0
	I0816 11:01:41.061352    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:01:41.061462    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 8072
	I0816 11:01:41.062777    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for f6:6:3f:b1:b2:c7 in /var/db/dhcpd_leases ...
	I0816 11:01:41.062890    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:01:41.062921    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:01:41.062942    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:01:41.062969    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:01:41.062984    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:01:41.063007    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:01:41.063030    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:01:41.063043    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:01:41.063057    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:01:41.063106    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:01:41.063131    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:01:41.063188    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:01:41.063228    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:01:41.063246    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:01:41.063261    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:01:41.063274    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:01:41.063289    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:01:41.063303    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:01:41.069237    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 11:01:41 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 11:01:41.077348    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 11:01:41 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/docker-flags-585000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 11:01:41.078222    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 11:01:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 11:01:41.078242    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 11:01:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 11:01:41.078271    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 11:01:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 11:01:41.078291    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 11:01:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 11:01:41.456118    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 11:01:41 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 11:01:41.456134    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 11:01:41 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 11:01:41.570680    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 11:01:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 11:01:41.570700    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 11:01:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 11:01:41.570734    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 11:01:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 11:01:41.570775    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 11:01:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 11:01:41.571639    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 11:01:41 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 11:01:41.571657    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 11:01:41 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 11:01:43.065084    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 1
	I0816 11:01:43.065101    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:01:43.065200    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 8072
	I0816 11:01:43.066004    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for f6:6:3f:b1:b2:c7 in /var/db/dhcpd_leases ...
	I0816 11:01:43.066057    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:01:43.066068    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:01:43.066094    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:01:43.066104    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:01:43.066123    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:01:43.066139    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:01:43.066147    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:01:43.066155    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:01:43.066169    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:01:43.066183    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:01:43.066192    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:01:43.066200    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:01:43.066207    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:01:43.066215    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:01:43.066224    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:01:43.066232    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:01:43.066239    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:01:43.066246    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:01:45.068125    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 2
	I0816 11:01:45.068138    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:01:45.068254    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 8072
	I0816 11:01:45.069207    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for f6:6:3f:b1:b2:c7 in /var/db/dhcpd_leases ...
	I0816 11:01:45.069280    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:01:45.069289    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:01:45.069298    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:01:45.069307    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:01:45.069314    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:01:45.069322    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:01:45.069329    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:01:45.069339    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:01:45.069345    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:01:45.069354    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:01:45.069361    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:01:45.069366    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:01:45.069378    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:01:45.069393    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:01:45.069403    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:01:45.069412    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:01:45.069423    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:01:45.069430    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:01:46.950728    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 11:01:46 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 11:01:46.950996    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 11:01:46 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 11:01:46.951007    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 11:01:46 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 11:01:46.970733    7719 main.go:141] libmachine: (docker-flags-585000) DBG | 2024/08/16 11:01:46 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 11:01:47.070075    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 3
	I0816 11:01:47.070105    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:01:47.070313    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 8072
	I0816 11:01:47.071828    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for f6:6:3f:b1:b2:c7 in /var/db/dhcpd_leases ...
	I0816 11:01:47.071918    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:01:47.071943    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:01:47.071979    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:01:47.071996    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:01:47.072010    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:01:47.072027    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:01:47.072086    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:01:47.072172    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:01:47.072183    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:01:47.072194    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:01:47.072205    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:01:47.072219    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:01:47.072229    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:01:47.072239    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:01:47.072252    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:01:47.072263    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:01:47.072272    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:01:47.072283    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:01:49.072129    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 4
	I0816 11:01:49.072145    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:01:49.072241    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 8072
	I0816 11:01:49.073020    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for f6:6:3f:b1:b2:c7 in /var/db/dhcpd_leases ...
	I0816 11:01:49.073081    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:01:49.073092    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:01:49.073102    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:01:49.073108    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:01:49.073115    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:01:49.073122    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:01:49.073152    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:01:49.073161    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:01:49.073171    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:01:49.073180    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:01:49.073194    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:01:49.073206    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:01:49.073216    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:01:49.073224    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:01:49.073248    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:01:49.073260    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:01:49.073268    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:01:49.073277    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:01:51.075203    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 5
	I0816 11:01:51.075215    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:01:51.075265    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 8072
	I0816 11:01:51.076147    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for f6:6:3f:b1:b2:c7 in /var/db/dhcpd_leases ...
	I0816 11:01:51.076178    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:01:51.076196    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:01:51.076218    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:01:51.076226    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:01:51.076233    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:01:51.076253    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:01:51.076266    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:01:51.076274    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:01:51.076289    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:01:51.076297    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:01:51.076303    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:01:51.076320    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:01:51.076333    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:01:51.076345    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:01:51.076353    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:01:51.076360    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:01:51.076368    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:01:51.076383    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:01:53.077044    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 6
	I0816 11:01:53.077057    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:01:53.077128    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 8072
	I0816 11:01:53.077999    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for f6:6:3f:b1:b2:c7 in /var/db/dhcpd_leases ...
	I0816 11:01:53.078040    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:01:53.078058    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:01:53.078067    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:01:53.078080    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:01:53.078087    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:01:53.078102    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:01:53.078114    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:01:53.078122    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:01:53.078131    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:01:53.078138    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:01:53.078146    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:01:53.078153    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:01:53.078161    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:01:53.078167    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:01:53.078175    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:01:53.078186    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:01:53.078194    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:01:53.078202    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:01:55.078878    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 7
	I0816 11:01:55.078891    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:01:55.078954    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 8072
	I0816 11:01:55.079833    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for f6:6:3f:b1:b2:c7 in /var/db/dhcpd_leases ...
	I0816 11:01:55.079869    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:01:55.079881    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:01:55.079895    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:01:55.079906    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:01:55.079917    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:01:55.079924    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:01:55.079940    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:01:55.079948    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:01:55.079956    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:01:55.079963    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:01:55.079977    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:01:55.079987    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:01:55.079994    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:01:55.080000    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:01:55.080008    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:01:55.080014    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:01:55.080020    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:01:55.080025    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:01:57.082000    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 8
	I0816 11:01:57.082012    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:01:57.082065    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 8072
	I0816 11:01:57.082898    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for f6:6:3f:b1:b2:c7 in /var/db/dhcpd_leases ...
	I0816 11:01:57.082944    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:01:57.082965    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:01:57.082979    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:01:57.083018    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:01:57.083035    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:01:57.083051    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:01:57.083060    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:01:57.083067    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:01:57.083078    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:01:57.083095    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:01:57.083107    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:01:57.083116    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:01:57.083124    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:01:57.083131    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:01:57.083141    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:01:57.083156    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:01:57.083167    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:01:57.083176    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:01:59.083151    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 9
	I0816 11:01:59.083162    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:01:59.083234    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 8072
	I0816 11:01:59.084068    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for f6:6:3f:b1:b2:c7 in /var/db/dhcpd_leases ...
	I0816 11:01:59.084087    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:01:59.084099    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:01:59.084116    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:01:59.084125    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:01:59.084134    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:01:59.084140    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:01:59.084147    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:01:59.084159    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:01:59.084167    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:01:59.084177    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:01:59.084184    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:01:59.084192    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:01:59.084199    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:01:59.084206    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:01:59.084228    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:01:59.084240    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:01:59.084248    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:01:59.084257    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:02:01.084242    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 10
	I0816 11:02:01.084261    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:02:01.084331    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 8072
	I0816 11:02:01.085104    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for f6:6:3f:b1:b2:c7 in /var/db/dhcpd_leases ...
	I0816 11:02:01.085145    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:02:01.085155    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:02:01.085168    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:02:01.085179    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:02:01.085186    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:02:01.085192    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:02:01.085199    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:02:01.085204    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:02:01.085210    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:02:01.085217    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:02:01.085228    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:02:01.085242    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:02:01.085253    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:02:01.085261    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:02:01.085281    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:02:01.085290    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:02:01.085297    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:02:01.085304    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:02:03.086398    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 11
	I0816 11:02:03.086414    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:02:03.086487    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 8072
	I0816 11:02:03.087279    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for f6:6:3f:b1:b2:c7 in /var/db/dhcpd_leases ...
	I0816 11:02:03.087341    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:02:03.087352    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:02:03.087365    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:02:03.087382    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:02:03.087391    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:02:03.087397    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:02:03.087415    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:02:03.087422    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:02:03.087428    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:02:03.087435    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:02:03.087443    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:02:03.087459    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:02:03.087466    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:02:03.087473    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:02:03.087487    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:02:03.087495    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:02:03.087509    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:02:03.087518    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:02:05.087491    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 12
	I0816 11:02:05.087505    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:02:05.087635    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 8072
	I0816 11:02:05.088448    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for f6:6:3f:b1:b2:c7 in /var/db/dhcpd_leases ...
	I0816 11:02:05.088497    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:02:05.088513    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:02:05.088522    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:02:05.088532    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:02:05.088538    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:02:05.088545    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:02:05.088568    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:02:05.088581    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:02:05.088604    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:02:05.088620    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:02:05.088632    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:02:05.088641    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:02:05.088658    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:02:05.088676    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:02:05.088686    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:02:05.088694    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:02:05.088702    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:02:05.088711    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:02:07.089265    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 13
	I0816 11:02:07.089278    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:02:07.089347    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 8072
	I0816 11:02:07.090179    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for f6:6:3f:b1:b2:c7 in /var/db/dhcpd_leases ...
	I0816 11:02:07.090223    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:02:07.090233    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:02:07.090248    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:02:07.090263    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:02:07.090272    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:02:07.090280    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:02:07.090287    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:02:07.090294    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:02:07.090300    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:02:07.090316    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:02:07.090332    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:02:07.090345    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:02:07.090356    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:02:07.090373    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:02:07.090382    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:02:07.090390    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:02:07.090395    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:02:07.090414    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:02:09.091341    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 14
	I0816 11:02:09.091358    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:02:09.091418    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 8072
	I0816 11:02:09.092230    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for f6:6:3f:b1:b2:c7 in /var/db/dhcpd_leases ...
	I0816 11:02:09.092278    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:02:09.092290    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:02:09.092297    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:02:09.092304    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:02:09.092317    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:02:09.092325    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:02:09.092332    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:02:09.092338    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:02:09.092345    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:02:09.092352    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:02:09.092364    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:02:09.092376    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:02:09.092387    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:02:09.092397    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:02:09.092415    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:02:09.092428    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:02:09.092437    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:02:09.092446    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:02:11.093387    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 15
	I0816 11:02:11.093403    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:02:11.093431    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 8072
	I0816 11:02:11.094269    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for f6:6:3f:b1:b2:c7 in /var/db/dhcpd_leases ...
	I0816 11:02:11.094311    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:02:11.094322    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:02:11.094344    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:02:11.094353    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:02:11.094365    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:02:11.094371    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:02:11.094377    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:02:11.094386    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:02:11.094404    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:02:11.094413    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:02:11.094420    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:02:11.094428    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:02:11.094441    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:02:11.094449    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:02:11.094458    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:02:11.094466    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:02:11.094473    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:02:11.094481    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:02:13.094496    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 16
	I0816 11:02:13.094509    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:02:13.094570    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 8072
	I0816 11:02:13.095489    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for f6:6:3f:b1:b2:c7 in /var/db/dhcpd_leases ...
	I0816 11:02:13.095525    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:02:13.095536    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:02:13.095557    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:02:13.095571    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:02:13.095584    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:02:13.095591    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:02:13.095600    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:02:13.095607    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:02:13.095613    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:02:13.095634    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:02:13.095646    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:02:13.095656    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:02:13.095663    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:02:13.095671    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:02:13.095678    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:02:13.095686    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:02:13.095711    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:02:13.095724    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:02:15.097586    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 17
	I0816 11:02:15.097600    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:02:15.097655    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 8072
	I0816 11:02:15.098474    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for f6:6:3f:b1:b2:c7 in /var/db/dhcpd_leases ...
	I0816 11:02:15.098487    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:02:15.098498    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:02:15.098507    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:02:15.098517    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:02:15.098525    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:02:15.098545    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:02:15.098554    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:02:15.098570    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:02:15.098633    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:02:15.098653    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:02:15.098666    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:02:15.098674    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:02:15.098683    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:02:15.098690    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:02:15.098698    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:02:15.098704    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:02:15.098712    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:02:15.098735    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:02:17.098796    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Attempt 18
	I0816 11:02:17.098811    7719 main.go:141] libmachine: (docker-flags-585000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:02:17.098872    7719 main.go:141] libmachine: (docker-flags-585000) DBG | hyperkit pid from json: 8072
	I0816 11:02:17.099798    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Searching for f6:6:3f:b1:b2:c7 in /var/db/dhcpd_leases ...
	I0816 11:02:17.099838    7719 main.go:141] libmachine: (docker-flags-585000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:02:17.099850    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:02:17.099869    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:02:17.099888    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:02:17.099900    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:02:17.099910    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:02:17.099918    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:02:17.099927    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:02:17.099939    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:02:17.099947    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:02:17.099965    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:02:17.099974    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:02:17.099980    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:02:17.099988    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:02:17.099998    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:02:17.100006    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:02:17.100015    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:02:17.100021    7719 main.go:141] libmachine: (docker-flags-585000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	[Attempts 19 through 29 (11:02:19 to 11:02:39, one scan every 2 seconds) elided: each repeats the identical search, finding the same 17 entries in /var/db/dhcpd_leases (192.169.0.2 through 192.169.0.18) and no match for f6:6:3f:b1:b2:c7.]
	I0816 11:02:41.130697    7719 client.go:171] duration metric: took 1m1.089061119s to LocalClient.Create
	I0816 11:02:43.131642    7719 start.go:128] duration metric: took 1m3.14320951s to createHost
	I0816 11:02:43.131653    7719 start.go:83] releasing machines lock for "docker-flags-585000", held for 1m3.143357573s
	W0816 11:02:43.131743    7719 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p docker-flags-585000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for f6:6:3f:b1:b2:c7
	I0816 11:02:43.194929    7719 out.go:201] 
	W0816 11:02:43.216160    7719 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for f6:6:3f:b1:b2:c7
	W0816 11:02:43.216172    7719 out.go:270] * 
	W0816 11:02:43.216820    7719 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 11:02:43.279057    7719 out.go:201] 

                                                
                                                
** /stderr **
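
The retry loop above is the hyperkit driver polling the macOS DHCP lease database for the VM's MAC address: every 2 seconds it re-reads /var/db/dhcpd_leases, sees the same 17 stale entries, and never finds a lease for f6:6:3f:b1:b2:c7. Note that the lease file stores octets without zero-padding ("f6:6:3f", not "f6:06:3f"), so any matcher has to compare MACs in that unpadded form. The Go sketch below illustrates the lookup; the stanza keys (name=, ip_address=, hw_address=) follow what macOS bootpd writes, but findIPByMAC and waitForIP are illustrative helpers under those assumptions, not minikube's actual driver code.

    // leaselookup.go: a minimal sketch of the poll shown in the log above.
    // Assumes /var/db/dhcpd_leases stanzas of the form bootpd writes:
    //   { name=... ip_address=... hw_address=1,<mac> ... }
    // and that ip_address precedes hw_address within a stanza.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
        "time"
    )

    // findIPByMAC scans the leases file for a stanza whose hw_address matches
    // mac. Pass the MAC in bootpd's unpadded form (e.g. "f6:6:3f:b1:b2:c7").
    func findIPByMAC(path, mac string) (string, error) {
        f, err := os.Open(path)
        if err != nil {
            return "", err
        }
        defer f.Close()

        var ip string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            switch {
            case line == "{": // new stanza: forget the previous entry's IP
                ip = ""
            case strings.HasPrefix(line, "ip_address="):
                ip = strings.TrimPrefix(line, "ip_address=")
            case strings.HasPrefix(line, "hw_address="):
                hw := strings.TrimPrefix(line, "hw_address=")
                if i := strings.IndexByte(hw, ','); i >= 0 {
                    hw = hw[i+1:] // drop the "1," hardware-type prefix
                }
                if strings.EqualFold(hw, mac) && ip != "" {
                    return ip, nil
                }
            }
        }
        if err := sc.Err(); err != nil {
            return "", err
        }
        return "", fmt.Errorf("no lease for %s", mac)
    }

    // waitForIP mirrors the log's cadence: one scan every 2 seconds until a
    // lease appears or maxAttempts is exhausted (the run above gives up after
    // attempt 29, roughly one minute).
    func waitForIP(path, mac string, maxAttempts int) (string, error) {
        for attempt := 1; attempt <= maxAttempts; attempt++ {
            if ip, err := findIPByMAC(path, mac); err == nil {
                return ip, nil
            }
            time.Sleep(2 * time.Second)
        }
        return "", fmt.Errorf("IP address never found in dhcp leases file for %s", mac)
    }

    func main() {
        ip, err := waitForIP("/var/db/dhcpd_leases", "f6:6:3f:b1:b2:c7", 30)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println(ip)
    }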
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-amd64 start -p docker-flags-585000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-585000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-585000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 50 (176.109988ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node docker-flags-585000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
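The literal "<no value>" in the suggested commands above is Go's text/template placeholder for a missing variable, which suggests the suggestion text is rendered from a template whose profile argument was never set. A minimal reproduction follows; the template string and the profileArg key are illustrative, not minikube's actual template:

    // novalue.go: reproduces the "<no value>" artifact seen above.
    package main

    import (
        "os"
        "text/template"
    )

    func main() {
        t := template.Must(template.New("suggest").Parse(
            "minikube delete {{.profileArg}}\nminikube start {{.profileArg}}\n"))
        // Executing against a map that lacks "profileArg" prints:
        //   minikube delete <no value>
        //   minikube start <no value>
        _ = t.Execute(os.Stdout, map[string]interface{}{})
    }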
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-585000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 50
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-585000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-585000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 50 (172.695111ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node docker-flags-585000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-585000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 50
docker_test.go:73: expected "out/minikube-darwin-amd64 -p docker-flags-585000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "\n\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-08-16 11:02:43.742983 -0700 PDT m=+4506.406368004
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-585000 -n docker-flags-585000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-585000 -n docker-flags-585000: exit status 7 (77.865435ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 11:02:43.818868    8100 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0816 11:02:43.818889    8100 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-585000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "docker-flags-585000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-585000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-585000: (5.24170799s)
--- FAIL: TestDockerFlags (252.26s)
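
For context on the assertions that failed at docker_test.go:63 and docker_test.go:73: the test starts the node with --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true, then expects each env pair to appear in dockerd's systemd Environment property and each opt to surface as a flag in ExecStart. A condensed sketch of those checks against a healthy profile is below; runSSH is a stand-in for the "minikube ssh" invocation the test shells out to, not the test's real helper.

    // dockerflagscheck.go: a condensed sketch of TestDockerFlags' assertions.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // runSSH runs a command inside the VM via "minikube ssh" (stand-in helper).
    func runSSH(profile, cmd string) (string, error) {
        out, err := exec.Command("out/minikube-darwin-amd64",
            "-p", profile, "ssh", cmd).CombinedOutput()
        return string(out), err
    }

    func main() {
        const profile = "docker-flags-585000"

        // Each --docker-env pair must show up in dockerd's Environment.
        env, err := runSSH(profile,
            "sudo systemctl show docker --property=Environment --no-pager")
        if err != nil {
            fmt.Println("ssh failed:", err)
            return
        }
        for _, kv := range []string{"FOO=BAR", "BAZ=BAT"} {
            if !strings.Contains(env, kv) {
                fmt.Printf("missing %q in Environment: %q\n", kv, env)
            }
        }

        // Each --docker-opt must surface as a daemon flag in ExecStart
        // (debug becomes --debug; icc=true becomes --icc=true).
        start, err := runSSH(profile,
            "sudo systemctl show docker --property=ExecStart --no-pager")
        if err != nil {
            fmt.Println("ssh failed:", err)
            return
        }
        for _, opt := range []string{"--debug", "--icc=true"} {
            if !strings.Contains(start, opt) {
                fmt.Printf("missing %q in ExecStart: %q\n", opt, start)
            }
        }
    }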

                                                
                                    
x
+
TestForceSystemdFlag (252.12s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-576000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-flag-576000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (4m6.555836939s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-576000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "force-systemd-flag-576000" primary control-plane node in "force-systemd-flag-576000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "force-systemd-flag-576000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 10:57:33.685378    7684 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:57:33.685555    7684 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:57:33.685561    7684 out.go:358] Setting ErrFile to fd 2...
	I0816 10:57:33.685564    7684 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:57:33.685751    7684 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1276/.minikube/bin
	I0816 10:57:33.687265    7684 out.go:352] Setting JSON to false
	I0816 10:57:33.710004    7684 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5223,"bootTime":1723825830,"procs":443,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0816 10:57:33.710103    7684 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:57:33.733864    7684 out.go:177] * [force-systemd-flag-576000] minikube v1.33.1 on Darwin 14.6.1
	I0816 10:57:33.776017    7684 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:57:33.776026    7684 notify.go:220] Checking for updates...
	I0816 10:57:33.818929    7684 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:57:33.840076    7684 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0816 10:57:33.860838    7684 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:57:33.881922    7684 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:57:33.902972    7684 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:57:33.924408    7684 config.go:182] Loaded profile config "force-systemd-env-773000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:57:33.924508    7684 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:57:33.953041    7684 out.go:177] * Using the hyperkit driver based on user configuration
	I0816 10:57:33.994896    7684 start.go:297] selected driver: hyperkit
	I0816 10:57:33.994910    7684 start.go:901] validating driver "hyperkit" against <nil>
	I0816 10:57:33.994921    7684 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:57:33.997994    7684 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:57:33.998110    7684 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19461-1276/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0816 10:57:34.006643    7684 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0816 10:57:34.010578    7684 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:57:34.010602    7684 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0816 10:57:34.010634    7684 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 10:57:34.010855    7684 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0816 10:57:34.010883    7684 cni.go:84] Creating CNI manager for ""
	I0816 10:57:34.010899    7684 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 10:57:34.010905    7684 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 10:57:34.010973    7684 start.go:340] cluster config:
	{Name:force-systemd-flag-576000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-576000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:57:34.011061    7684 iso.go:125] acquiring lock: {Name:mkb430b43b5e7a8a683454482519837ed996c78d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:57:34.032169    7684 out.go:177] * Starting "force-systemd-flag-576000" primary control-plane node in "force-systemd-flag-576000" cluster
	I0816 10:57:34.073901    7684 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:57:34.073945    7684 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0816 10:57:34.073960    7684 cache.go:56] Caching tarball of preloaded images
	I0816 10:57:34.074075    7684 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 10:57:34.074085    7684 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:57:34.074163    7684 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/force-systemd-flag-576000/config.json ...
	I0816 10:57:34.074180    7684 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/force-systemd-flag-576000/config.json: {Name:mk43194cf5a925d68bcfcec1ae0a2ccf0ea54a2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:57:34.074489    7684 start.go:360] acquireMachinesLock for force-systemd-flag-576000: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:58:31.049481    7684 start.go:364] duration metric: took 56.976723104s to acquireMachinesLock for "force-systemd-flag-576000"
	I0816 10:58:31.049524    7684 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-576000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-576000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:58:31.049580    7684 start.go:125] createHost starting for "" (driver="hyperkit")
	I0816 10:58:31.091740    7684 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0816 10:58:31.091900    7684 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:58:31.091956    7684 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:58:31.100913    7684 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53874
	I0816 10:58:31.101311    7684 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:58:31.101909    7684 main.go:141] libmachine: Using API Version  1
	I0816 10:58:31.101930    7684 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:58:31.102242    7684 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:58:31.102373    7684 main.go:141] libmachine: (force-systemd-flag-576000) Calling .GetMachineName
	I0816 10:58:31.102544    7684 main.go:141] libmachine: (force-systemd-flag-576000) Calling .DriverName
	I0816 10:58:31.102670    7684 start.go:159] libmachine.API.Create for "force-systemd-flag-576000" (driver="hyperkit")
	I0816 10:58:31.102697    7684 client.go:168] LocalClient.Create starting
	I0816 10:58:31.102731    7684 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem
	I0816 10:58:31.102781    7684 main.go:141] libmachine: Decoding PEM data...
	I0816 10:58:31.102797    7684 main.go:141] libmachine: Parsing certificate...
	I0816 10:58:31.102851    7684 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem
	I0816 10:58:31.102887    7684 main.go:141] libmachine: Decoding PEM data...
	I0816 10:58:31.102900    7684 main.go:141] libmachine: Parsing certificate...
	I0816 10:58:31.102913    7684 main.go:141] libmachine: Running pre-create checks...
	I0816 10:58:31.102922    7684 main.go:141] libmachine: (force-systemd-flag-576000) Calling .PreCreateCheck
	I0816 10:58:31.103004    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:58:31.103149    7684 main.go:141] libmachine: (force-systemd-flag-576000) Calling .GetConfigRaw
	I0816 10:58:31.112912    7684 main.go:141] libmachine: Creating machine...
	I0816 10:58:31.112920    7684 main.go:141] libmachine: (force-systemd-flag-576000) Calling .Create
	I0816 10:58:31.113028    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:58:31.113174    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | I0816 10:58:31.113015    7702 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:58:31.113255    7684 main.go:141] libmachine: (force-systemd-flag-576000) Downloading /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1276/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 10:58:31.534891    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | I0816 10:58:31.534811    7702 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/id_rsa...
	I0816 10:58:31.694287    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | I0816 10:58:31.694215    7702 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/force-systemd-flag-576000.rawdisk...
	I0816 10:58:31.694301    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Writing magic tar header
	I0816 10:58:31.694342    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Writing SSH key tar header
	I0816 10:58:31.715637    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | I0816 10:58:31.715603    7702 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000 ...
	I0816 10:58:32.088629    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:58:32.088655    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/hyperkit.pid
	I0816 10:58:32.088720    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Using UUID 8f4a4ebc-4e7d-4cda-adb8-2aeb4f262c26
	I0816 10:58:32.113999    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Generated MAC 6e:45:a1:9a:8a:1e
	I0816 10:58:32.114020    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-576000
	I0816 10:58:32.114049    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 10:58:32 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8f4a4ebc-4e7d-4cda-adb8-2aeb4f262c26", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:58:32.114077    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 10:58:32 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8f4a4ebc-4e7d-4cda-adb8-2aeb4f262c26", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:58:32.114135    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 10:58:32 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "8f4a4ebc-4e7d-4cda-adb8-2aeb4f262c26", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/force-systemd-flag-576000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-576000"}
	I0816 10:58:32.114173    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 10:58:32 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 8f4a4ebc-4e7d-4cda-adb8-2aeb4f262c26 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/force-systemd-flag-576000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-576000"
	I0816 10:58:32.114182    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 10:58:32 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 10:58:32.117279    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 10:58:32 DEBUG: hyperkit: Pid is 7717
	I0816 10:58:32.117737    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Attempt 0
	I0816 10:58:32.117761    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:58:32.117863    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 7717
	I0816 10:58:32.119044    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Searching for 6e:45:a1:9a:8a:1e in /var/db/dhcpd_leases ...
	I0816 10:58:32.119142    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:58:32.119179    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:58:32.119197    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:58:32.119213    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:58:32.119224    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:58:32.119244    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:58:32.119260    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:58:32.119287    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:58:32.119305    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:58:32.119319    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:58:32.119334    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:58:32.119363    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:58:32.119377    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:58:32.119390    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:58:32.119404    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:58:32.119416    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:58:32.119429    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:58:32.119442    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:58:32.124776    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 10:58:32 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 10:58:32.132967    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 10:58:32 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 10:58:32.133790    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 10:58:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:58:32.133811    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 10:58:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:58:32.133824    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 10:58:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:58:32.133835    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 10:58:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:58:32.506925    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 10:58:32 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 10:58:32.506945    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 10:58:32 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 10:58:32.621692    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 10:58:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:58:32.621711    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 10:58:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:58:32.621733    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 10:58:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:58:32.621748    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 10:58:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:58:32.622604    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 10:58:32 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 10:58:32.622616    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 10:58:32 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 10:58:34.120187    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Attempt 1
	I0816 10:58:34.120201    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:58:34.120251    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 7717
	I0816 10:58:34.121052    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Searching for 6e:45:a1:9a:8a:1e in /var/db/dhcpd_leases ...
	I0816 10:58:34.121093    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:58:34.121103    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:58:34.121112    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:58:34.121123    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:58:34.121130    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:58:34.121136    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:58:34.121144    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:58:34.121150    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:58:34.121158    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:58:34.121166    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:58:34.121181    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:58:34.121195    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:58:34.121219    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:58:34.121233    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:58:34.121241    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:58:34.121247    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:58:34.121262    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:58:34.121275    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:58:36.121802    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Attempt 2
	I0816 10:58:36.121819    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:58:36.121896    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 7717
	I0816 10:58:36.122828    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Searching for 6e:45:a1:9a:8a:1e in /var/db/dhcpd_leases ...
	I0816 10:58:36.122853    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:58:36.122863    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:58:36.122873    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:58:36.122879    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:58:36.122885    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:58:36.122891    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:58:36.122897    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:58:36.122902    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:58:36.122908    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:58:36.122914    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:58:36.122920    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:58:36.122927    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:58:36.122935    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:58:36.122949    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:58:36.122962    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:58:36.122971    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:58:36.122980    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:58:36.122989    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:58:37.980476    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 10:58:37 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0816 10:58:37.980690    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 10:58:37 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0816 10:58:37.980703    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 10:58:37 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0816 10:58:38.001362    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 10:58:38 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0816 10:58:38.125089    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Attempt 3
	I0816 10:58:38.125116    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:58:38.125299    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 7717
	I0816 10:58:38.126757    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Searching for 6e:45:a1:9a:8a:1e in /var/db/dhcpd_leases ...
	I0816 10:58:38.126874    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:58:38.126893    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:58:38.126911    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:58:38.126923    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:58:38.126936    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:58:38.126949    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:58:38.126964    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:58:38.126981    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:58:38.127016    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:58:38.127054    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:58:38.127065    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:58:38.127085    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:58:38.127099    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:58:38.127114    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:58:38.127125    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:58:38.127136    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:58:38.127146    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:58:38.127156    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:58:40.127176    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Attempt 4
	I0816 10:58:40.127194    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:58:40.127298    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 7717
	I0816 10:58:40.128084    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Searching for 6e:45:a1:9a:8a:1e in /var/db/dhcpd_leases ...
	I0816 10:58:40.128137    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:58:40.128148    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:58:40.128155    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:58:40.128171    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:58:40.128182    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:58:40.128192    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:58:40.128200    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:58:40.128212    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:58:40.128228    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:58:40.128235    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:58:40.128244    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:58:40.128249    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:58:40.128256    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:58:40.128262    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:58:40.128270    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:58:40.128277    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:58:40.128283    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:58:40.128291    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:58:42.130296    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Attempt 5
	I0816 10:58:42.130311    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:58:42.130363    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 7717
	I0816 10:58:42.131164    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Searching for 6e:45:a1:9a:8a:1e in /var/db/dhcpd_leases ...
	I0816 10:58:42.131209    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:58:42.131218    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:58:42.131232    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:58:42.131240    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:58:42.131246    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:58:42.131253    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:58:42.131260    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:58:42.131266    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:58:42.131273    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:58:42.131280    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:58:42.131304    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:58:42.131333    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:58:42.131368    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:58:42.131374    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:58:42.131380    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:58:42.131388    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:58:42.131395    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:58:42.131402    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:58:44.131468    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Attempt 6
	I0816 10:58:44.131480    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:58:44.131557    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 7717
	I0816 10:58:44.132359    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Searching for 6e:45:a1:9a:8a:1e in /var/db/dhcpd_leases ...
	I0816 10:58:44.132413    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:58:44.132425    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:58:44.132438    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:58:44.132448    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:58:44.132455    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:58:44.132461    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:58:44.132468    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:58:44.132476    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:58:44.132483    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:58:44.132489    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:58:44.132511    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:58:44.132519    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:58:44.132535    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:58:44.132546    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:58:44.132560    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:58:44.132568    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:58:44.132575    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:58:44.132583    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:58:46.133055    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Attempt 7
	I0816 10:58:46.133068    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:58:46.133128    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 7717
	I0816 10:58:46.133991    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Searching for 6e:45:a1:9a:8a:1e in /var/db/dhcpd_leases ...
	I0816 10:58:46.134045    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:58:46.134055    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:58:46.134064    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:58:46.134076    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:58:46.134084    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:58:46.134090    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:58:46.134099    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:58:46.134106    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:58:46.134112    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:58:46.134121    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:58:46.134136    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:58:46.134147    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:58:46.134154    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:58:46.134162    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:58:46.134183    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:58:46.134192    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:58:46.134208    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:58:46.134222    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:58:48.136142    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Attempt 8
	I0816 10:58:48.136158    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:58:48.136219    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 7717
	I0816 10:58:48.137030    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Searching for 6e:45:a1:9a:8a:1e in /var/db/dhcpd_leases ...
	I0816 10:58:48.137080    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:58:48.137090    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:58:48.137098    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:58:48.137105    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:58:48.137112    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:58:48.137119    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:58:48.137126    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:58:48.137131    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:58:48.137149    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:58:48.137155    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:58:48.137167    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:58:48.137177    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:58:48.137185    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:58:48.137193    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:58:48.137210    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:58:48.137222    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:58:48.137230    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:58:48.137238    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:58:50.138668    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Attempt 9
	I0816 10:58:50.138681    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:58:50.138822    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 7717
	I0816 10:58:50.139793    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Searching for 6e:45:a1:9a:8a:1e in /var/db/dhcpd_leases ...
	I0816 10:58:50.139842    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:58:50.139850    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:58:50.139858    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:58:50.139864    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:58:50.139872    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:58:50.139877    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:58:50.139904    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:58:50.139918    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:58:50.139926    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:58:50.139935    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:58:50.139945    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:58:50.139952    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:58:50.139959    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:58:50.139967    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:58:50.139974    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:58:50.139982    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:58:50.139989    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:58:50.139997    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:58:52.140714    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Attempt 10
	I0816 10:58:52.140726    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:58:52.140789    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 7717
	I0816 10:58:52.141635    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Searching for 6e:45:a1:9a:8a:1e in /var/db/dhcpd_leases ...
	I0816 10:58:52.141678    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:58:52.141689    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:58:52.141710    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:58:52.141721    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:58:52.141729    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:58:52.141738    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:58:52.141746    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:58:52.141753    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:58:52.141814    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:58:52.141833    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:58:52.141842    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:58:52.141850    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:58:52.141857    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:58:52.141865    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:58:52.141880    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:58:52.141893    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:58:52.141908    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:58:52.141930    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:58:54.142603    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Attempt 11
	I0816 10:58:54.142633    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:58:54.142726    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 7717
	I0816 10:58:54.143761    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Searching for 6e:45:a1:9a:8a:1e in /var/db/dhcpd_leases ...
	I0816 10:58:54.143783    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:58:54.143796    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:58:54.143806    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:58:54.143812    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:58:54.143828    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:58:54.143837    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:58:54.143852    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:58:54.143864    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:58:54.143876    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:58:54.143885    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:58:54.143892    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:58:54.143901    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:58:54.143909    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:58:54.143917    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:58:54.143934    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:58:54.143948    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:58:54.143970    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:58:54.144005    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:58:56.144520    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Attempt 12
	I0816 10:58:56.144537    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:58:56.144584    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 7717
	I0816 10:58:56.145359    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Searching for 6e:45:a1:9a:8a:1e in /var/db/dhcpd_leases ...
	I0816 10:58:56.145401    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:58:56.145412    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:58:56.145422    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:58:56.145428    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:58:56.145435    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:58:56.145440    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:58:56.145463    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:58:56.145477    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:58:56.145486    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:58:56.145495    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:58:56.145504    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:58:56.145512    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:58:56.145519    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:58:56.145544    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:58:56.145553    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:58:56.145560    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:58:56.145570    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:58:56.145578    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:58:58.147291    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Attempt 13
	I0816 10:58:58.147304    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:58:58.147351    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 7717
	I0816 10:58:58.148123    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Searching for 6e:45:a1:9a:8a:1e in /var/db/dhcpd_leases ...
	I0816 10:58:58.148166    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:58:58.148176    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:58:58.148186    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:58:58.148196    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:58:58.148202    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:58:58.148214    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:58:58.148223    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:58:58.148236    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:58:58.148244    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:58:58.148253    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:58:58.148261    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:58:58.148279    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:58:58.148294    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:58:58.148302    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:58:58.148310    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:58:58.148318    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:58:58.148326    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:58:58.148336    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:59:00.149536    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Attempt 14
	I0816 10:59:00.149551    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:59:00.149673    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 7717
	I0816 10:59:00.150562    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Searching for 6e:45:a1:9a:8a:1e in /var/db/dhcpd_leases ...
	I0816 10:59:00.150639    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:59:00.150652    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:59:00.150659    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:59:00.150665    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:59:00.150671    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:59:00.150696    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:59:00.150713    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:59:00.150726    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:59:00.150752    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:59:00.150772    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:59:00.150783    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:59:00.150790    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:59:00.150798    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:59:00.150804    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:59:00.150812    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:59:00.150832    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:59:00.150881    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:59:00.150891    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:59:02.151593    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Attempt 15
	I0816 10:59:02.151610    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:59:02.151682    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 7717
	I0816 10:59:02.152639    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Searching for 6e:45:a1:9a:8a:1e in /var/db/dhcpd_leases ...
	I0816 10:59:02.152695    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:59:02.152708    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:59:02.152728    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:59:02.152738    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:59:02.152748    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:59:02.152755    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:59:02.152763    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:59:02.152771    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:59:02.152779    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:59:02.152786    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:59:02.152794    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:59:02.152801    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:59:02.152816    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:59:02.152824    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:59:02.152850    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:59:02.152861    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:59:02.152871    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:59:02.152879    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:59:04.154797    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Attempt 16
	I0816 10:59:04.154811    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:59:04.154950    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 7717
	I0816 10:59:04.155743    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Searching for 6e:45:a1:9a:8a:1e in /var/db/dhcpd_leases ...
	I0816 10:59:04.155793    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:59:04.155805    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:59:04.155815    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:59:04.155822    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:59:04.155829    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:59:04.155835    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:59:04.155846    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:59:04.155854    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:59:04.155861    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:59:04.155868    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:59:04.155874    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:59:04.155891    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:59:04.155914    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:59:04.155932    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:59:04.155944    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:59:04.155951    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:59:04.155964    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:59:04.155973    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:59:06.156941    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Attempt 17
	I0816 10:59:06.156956    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:59:06.156991    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 7717
	I0816 10:59:06.157824    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Searching for 6e:45:a1:9a:8a:1e in /var/db/dhcpd_leases ...
	I0816 10:59:06.157863    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:59:06.157875    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:59:06.157890    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:59:06.157897    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:59:06.157903    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:59:06.157913    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:59:06.157921    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:59:06.157930    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:59:06.157937    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:59:06.157946    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:59:06.157954    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:59:06.157961    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:59:06.157976    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:59:06.157988    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:59:06.158007    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:59:06.158019    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:59:06.158027    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:59:06.158035    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:59:08.158175    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Attempt 18
	I0816 10:59:08.158191    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:59:08.158243    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 7717
	I0816 10:59:08.159036    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Searching for 6e:45:a1:9a:8a:1e in /var/db/dhcpd_leases ...
	I0816 10:59:08.159087    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:59:08.159098    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:59:08.159106    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:59:08.159112    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:59:08.159118    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:59:08.159125    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:59:08.159133    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:59:08.159141    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:59:08.159147    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:59:08.159169    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:59:08.159184    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:59:08.159204    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:59:08.159211    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:59:08.159219    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:59:08.159226    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:59:08.159239    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:59:08.159255    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:59:08.159265    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:59:10.160470    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Attempt 19
	I0816 10:59:10.160483    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:59:10.160556    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 7717
	I0816 10:59:10.161350    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Searching for 6e:45:a1:9a:8a:1e in /var/db/dhcpd_leases ...
	I0816 10:59:10.161394    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:59:10.161409    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:59:10.161430    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:59:10.161441    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:59:10.161458    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:59:10.161469    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:59:10.161476    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:59:10.161485    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:59:10.161492    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:59:10.161498    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:59:10.161507    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:59:10.161518    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:59:10.161527    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:59:10.161533    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:59:10.161541    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:59:10.161550    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:59:10.161556    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:59:10.161571    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:59:12.163131    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Attempt 20
	I0816 10:59:12.163147    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:59:12.163204    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 7717
	I0816 10:59:12.163979    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Searching for 6e:45:a1:9a:8a:1e in /var/db/dhcpd_leases ...
	I0816 10:59:12.164018    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:59:12.164031    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:59:12.164054    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:59:12.164069    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:59:12.164096    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:59:12.164106    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:59:12.164113    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:59:12.164134    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:59:12.164140    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:59:12.164147    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:59:12.164155    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:59:12.164171    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:59:12.164196    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:59:12.164209    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:59:12.164221    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:59:12.164230    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:59:12.164239    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:59:12.164246    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:59:14.166169    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Attempt 21
	I0816 10:59:14.166185    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:59:14.166227    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 7717
	I0816 10:59:14.167007    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Searching for 6e:45:a1:9a:8a:1e in /var/db/dhcpd_leases ...
	I0816 10:59:14.167050    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:59:14.167060    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:59:14.167083    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:59:14.167092    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:59:14.167104    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:59:14.167115    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:59:14.167122    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:59:14.167129    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:59:14.167144    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:59:14.167150    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:59:14.167156    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:59:14.167164    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:59:14.167172    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:59:14.167181    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:59:14.167210    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:59:14.167219    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:59:14.167225    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:59:14.167233    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:59:16.169168    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Attempt 22
	I0816 10:59:16.169182    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:59:16.169224    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 7717
	I0816 10:59:16.170046    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Searching for 6e:45:a1:9a:8a:1e in /var/db/dhcpd_leases ...
	I0816 10:59:16.170090    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:59:16.170102    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:59:16.170122    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:59:16.170135    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:59:16.170147    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:59:16.170159    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:59:16.170169    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:59:16.170176    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:59:16.170185    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:59:16.170192    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:59:16.170205    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:59:16.170229    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:59:16.170261    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:59:16.170278    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:59:16.170287    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:59:16.170295    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:59:16.170303    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:59:16.170311    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:59:18.172201    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Attempt 23
	I0816 10:59:18.172217    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:59:18.172324    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 7717
	I0816 10:59:18.173103    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Searching for 6e:45:a1:9a:8a:1e in /var/db/dhcpd_leases ...
	I0816 10:59:18.173148    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:59:18.173161    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:59:18.173169    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:59:18.173178    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:59:18.173206    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:59:18.173214    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:59:18.173221    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:59:18.173231    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:59:18.173250    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:59:18.173263    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:59:18.173272    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:59:18.173282    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:59:18.173290    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:59:18.173297    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:59:18.173310    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:59:18.173319    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:59:18.173326    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:59:18.173334    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:59:20.175258    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Attempt 24
	I0816 10:59:20.175271    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:59:20.175342    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 7717
	I0816 10:59:20.176356    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Searching for 6e:45:a1:9a:8a:1e in /var/db/dhcpd_leases ...
	I0816 10:59:20.176393    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:59:20.176404    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:59:20.176417    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:59:20.176427    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:59:20.176441    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:59:20.176450    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:59:20.176457    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:59:20.176463    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:59:20.176473    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:59:20.176487    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:59:20.176500    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:59:20.176508    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:59:20.176516    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:59:20.176531    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:59:20.176540    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:59:20.176547    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:59:20.176556    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:59:20.176565    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:59:22.177080    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Attempt 25
	I0816 10:59:22.177106    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:59:22.177171    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 7717
	I0816 10:59:22.177944    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Searching for 6e:45:a1:9a:8a:1e in /var/db/dhcpd_leases ...
	I0816 10:59:22.177998    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:59:22.178011    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:59:22.178020    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:59:22.178028    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:59:22.178042    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:59:22.178050    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:59:22.178057    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:59:22.178063    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:59:22.178083    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:59:22.178096    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:59:22.178104    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:59:22.178112    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:59:22.178119    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:59:22.178126    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:59:22.178134    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:59:22.178141    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:59:22.178147    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:59:22.178155    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:59:24.180127    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Attempt 26
	I0816 10:59:24.180142    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:59:24.180200    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 7717
	I0816 10:59:24.181068    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Searching for 6e:45:a1:9a:8a:1e in /var/db/dhcpd_leases ...
	I0816 10:59:24.181116    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:59:24.181125    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:59:24.181136    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:59:24.181143    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:59:24.181151    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:59:24.181159    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:59:24.181165    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:59:24.181171    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:59:24.181181    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:59:24.181189    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:59:24.181200    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:59:24.181208    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:59:24.181216    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:59:24.181222    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:59:24.181243    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:59:24.181255    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:59:24.181263    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:59:24.181276    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:59:26.183297    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Attempt 27
	I0816 10:59:26.183308    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:59:26.183367    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 7717
	I0816 10:59:26.184138    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Searching for 6e:45:a1:9a:8a:1e in /var/db/dhcpd_leases ...
	I0816 10:59:26.184199    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:59:26.184211    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:59:26.184223    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:59:26.184231    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:59:26.184239    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:59:26.184246    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:59:26.184253    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:59:26.184261    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:59:26.184267    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:59:26.184277    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:59:26.184284    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:59:26.184292    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:59:26.184322    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:59:26.184339    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:59:26.184352    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:59:26.184367    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:59:26.184375    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:59:26.184384    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:59:28.184601    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Attempt 28
	I0816 10:59:28.184613    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:59:28.184687    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 7717
	I0816 10:59:28.185451    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Searching for 6e:45:a1:9a:8a:1e in /var/db/dhcpd_leases ...
	I0816 10:59:28.185497    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:59:28.185505    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:59:28.185545    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:59:28.185557    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:59:28.185568    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:59:28.185577    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:59:28.185584    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:59:28.185598    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:59:28.185609    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:59:28.185617    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:59:28.185624    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:59:28.185633    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:59:28.185640    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:59:28.185648    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:59:28.185655    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:59:28.185661    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:59:28.185668    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:59:28.185677    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:59:30.186555    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Attempt 29
	I0816 10:59:30.186568    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:59:30.186640    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 7717
	I0816 10:59:30.187434    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Searching for 6e:45:a1:9a:8a:1e in /var/db/dhcpd_leases ...
	I0816 10:59:30.187489    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:59:30.187501    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:59:30.187513    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:59:30.187520    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:59:30.187528    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:59:30.187535    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:59:30.187542    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:59:30.187548    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:59:30.187554    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:59:30.187563    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:59:30.187574    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:59:30.187585    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:59:30.187594    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:59:30.187602    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:59:30.187609    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:59:30.187621    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:59:30.187630    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:59:30.187639    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:59:32.188803    7684 client.go:171] duration metric: took 1m1.08796722s to LocalClient.Create
	I0816 10:59:34.189607    7684 start.go:128] duration metric: took 1m3.141947162s to createHost
	I0816 10:59:34.189621    7684 start.go:83] releasing machines lock for "force-systemd-flag-576000", held for 1m3.142062791s
	W0816 10:59:34.189635    7684 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 6e:45:a1:9a:8a:1e
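The "Attempt N" loop above polls macOS's /var/db/dhcpd_leases roughly every two seconds for the VM's generated MAC address and gives up after 30 attempts (~1 minute) with the "IP address never found" error. A minimal sketch of that polling pattern, assuming a lease-file layout of name=/ip_address=/hw_address= fields inside braced blocks — the field names and the helper below are illustrative, not the driver's code:

-- sketch (illustrative Go, not from the minikube source) --
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
	"time"
)

// findIPForMAC scans the macOS DHCP lease file for a hardware address and
// returns the matching IP, mirroring the "Searching for <mac> in
// /var/db/dhcpd_leases" loop in the log. The entry layout assumed here
// (name=/ip_address=/hw_address= lines inside {...} blocks) is an
// illustration of the file's shape, not code taken from the driver.
func findIPForMAC(leaseFile, mac string) (string, error) {
	f, err := os.Open(leaseFile)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// hw_address carries an address-type prefix, e.g. "1,1a:c0:fb:...".
			hw := strings.TrimPrefix(line, "hw_address=")
			if i := strings.IndexByte(hw, ','); i >= 0 {
				hw = hw[i+1:]
			}
			if strings.EqualFold(hw, mac) && ip != "" {
				return ip, nil
			}
		}
	}
	return "", fmt.Errorf("could not find an IP address for %s", mac)
}

func main() {
	// Poll every 2 seconds; 30 attempts matches the "Attempt 0".."Attempt 29"
	// window visible in the timestamps above.
	for attempt := 0; attempt < 30; attempt++ {
		if ip, err := findIPForMAC("/var/db/dhcpd_leases", "6e:45:a1:9a:8a:1e"); err == nil {
			fmt.Println("found IP:", ip)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("IP address never found in dhcp leases file")
}
-- /sketch --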
	I0816 10:59:34.189940    7684 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:59:34.189970    7684 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:59:34.199037    7684 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53890
	I0816 10:59:34.199532    7684 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:59:34.200066    7684 main.go:141] libmachine: Using API Version  1
	I0816 10:59:34.200099    7684 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:59:34.200381    7684 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:59:34.200749    7684 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:59:34.200801    7684 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:59:34.209545    7684 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53892
	I0816 10:59:34.210089    7684 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:59:34.210539    7684 main.go:141] libmachine: Using API Version  1
	I0816 10:59:34.210554    7684 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:59:34.210801    7684 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:59:34.210949    7684 main.go:141] libmachine: (force-systemd-flag-576000) Calling .GetState
	I0816 10:59:34.211054    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:59:34.211124    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 7717
	I0816 10:59:34.212151    7684 main.go:141] libmachine: (force-systemd-flag-576000) Calling .DriverName
	I0816 10:59:34.233027    7684 out.go:177] * Deleting "force-systemd-flag-576000" in hyperkit ...
	I0816 10:59:34.275064    7684 main.go:141] libmachine: (force-systemd-flag-576000) Calling .Remove
	I0816 10:59:34.275203    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:59:34.275213    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:59:34.275286    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 7717
	I0816 10:59:34.276212    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:59:34.276275    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | waiting for graceful shutdown
	I0816 10:59:35.278364    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:59:35.278491    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 7717
	I0816 10:59:35.279389    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | waiting for graceful shutdown
	I0816 10:59:36.279517    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:59:36.279641    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 7717
	I0816 10:59:36.281295    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | waiting for graceful shutdown
	I0816 10:59:37.282240    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:59:37.282329    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 7717
	I0816 10:59:37.282950    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | waiting for graceful shutdown
	I0816 10:59:38.283174    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:59:38.283251    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 7717
	I0816 10:59:38.283801    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | waiting for graceful shutdown
	I0816 10:59:39.284674    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:59:39.284751    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 7717
	I0816 10:59:39.285768    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | sending sigkill
	I0816 10:59:39.285776    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	W0816 10:59:39.298116    7684 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 6e:45:a1:9a:8a:1e
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 6e:45:a1:9a:8a:1e
	I0816 10:59:39.298130    7684 start.go:729] Will try again in 5 seconds ...
	I0816 10:59:39.307607    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 10:59:39 WARN : hyperkit: failed to read stderr: EOF
	I0816 10:59:39.307635    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 10:59:39 WARN : hyperkit: failed to read stdout: EOF
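Between 10:59:34 and 10:59:39 the teardown above rechecks the hyperkit pid once per second while "waiting for graceful shutdown", then escalates to "sending sigkill". A sketch of that stop-with-escalation pattern (the helper name and grace-period handling are assumptions, not driver code):

-- sketch (illustrative Go, not from the minikube source) --
package main

import (
	"fmt"
	"os"
	"syscall"
	"time"
)

// stopProcess asks a process to exit and escalates to SIGKILL if it is
// still alive after the grace period, matching the "waiting for graceful
// shutdown" ... "sending sigkill" sequence in the log.
func stopProcess(pid int, grace time.Duration) error {
	proc, err := os.FindProcess(pid)
	if err != nil {
		return err
	}
	// Ask politely first.
	if err := proc.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	deadline := time.Now().Add(grace)
	for time.Now().Before(deadline) {
		// Signal 0 only checks whether the process still exists.
		if err := proc.Signal(syscall.Signal(0)); err != nil {
			return nil // already gone
		}
		time.Sleep(time.Second)
	}
	fmt.Println("sending sigkill")
	return proc.Signal(syscall.SIGKILL)
}

func main() {
	_ = stopProcess(7717, 5*time.Second)
}
-- /sketch --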
	I0816 10:59:44.300061    7684 start.go:360] acquireMachinesLock for force-systemd-flag-576000: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 11:00:37.044036    7684 start.go:364] duration metric: took 52.745540603s to acquireMachinesLock for "force-systemd-flag-576000"
	I0816 11:00:37.044076    7684 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-576000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-576000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
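Both create attempts run under a per-profile machines lock; the acquireMachinesLock entry above records its retry delay (500ms) and timeout (13m), and this second attempt waited 52.7s for the previous holder to release it. A generic sketch of a file-based lock with that delay/timeout shape (illustrative only; minikube's actual lock implementation differs):

-- sketch (illustrative Go, not from the minikube source) --
package main

import (
	"fmt"
	"os"
	"time"
)

// acquireLock polls for an exclusive lock file, retrying every delay until
// timeout, in the spirit of the acquireMachinesLock entry above
// ({... Delay:500ms Timeout:13m0s ...}). Path and semantics are illustrative.
func acquireLock(path string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		// O_CREATE|O_EXCL fails while another holder's lock file exists.
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquireLock("/tmp/force-systemd-flag-576000.lock", 500*time.Millisecond, 13*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	fmt.Println("lock held; provisioning can start")
}
-- /sketch --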
	I0816 11:00:37.044130    7684 start.go:125] createHost starting for "" (driver="hyperkit")
	I0816 11:00:37.065602    7684 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0816 11:00:37.065670    7684 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 11:00:37.065707    7684 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 11:00:37.074216    7684 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53902
	I0816 11:00:37.074609    7684 main.go:141] libmachine: () Calling .GetVersion
	I0816 11:00:37.075034    7684 main.go:141] libmachine: Using API Version  1
	I0816 11:00:37.075050    7684 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 11:00:37.075346    7684 main.go:141] libmachine: () Calling .GetMachineName
	I0816 11:00:37.075461    7684 main.go:141] libmachine: (force-systemd-flag-576000) Calling .GetMachineName
	I0816 11:00:37.075560    7684 main.go:141] libmachine: (force-systemd-flag-576000) Calling .DriverName
	I0816 11:00:37.075680    7684 start.go:159] libmachine.API.Create for "force-systemd-flag-576000" (driver="hyperkit")
	I0816 11:00:37.075708    7684 client.go:168] LocalClient.Create starting
	I0816 11:00:37.075741    7684 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem
	I0816 11:00:37.075789    7684 main.go:141] libmachine: Decoding PEM data...
	I0816 11:00:37.075800    7684 main.go:141] libmachine: Parsing certificate...
	I0816 11:00:37.075853    7684 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem
	I0816 11:00:37.075891    7684 main.go:141] libmachine: Decoding PEM data...
	I0816 11:00:37.075902    7684 main.go:141] libmachine: Parsing certificate...
	I0816 11:00:37.075914    7684 main.go:141] libmachine: Running pre-create checks...
	I0816 11:00:37.075919    7684 main.go:141] libmachine: (force-systemd-flag-576000) Calling .PreCreateCheck
	I0816 11:00:37.076005    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:00:37.076029    7684 main.go:141] libmachine: (force-systemd-flag-576000) Calling .GetConfigRaw
	I0816 11:00:37.108488    7684 main.go:141] libmachine: Creating machine...
	I0816 11:00:37.108500    7684 main.go:141] libmachine: (force-systemd-flag-576000) Calling .Create
	I0816 11:00:37.108615    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:00:37.108742    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | I0816 11:00:37.108606    8044 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 11:00:37.108784    7684 main.go:141] libmachine: (force-systemd-flag-576000) Downloading /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1276/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 11:00:37.318159    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | I0816 11:00:37.318050    8044 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/id_rsa...
	I0816 11:00:37.417950    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | I0816 11:00:37.417882    8044 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/force-systemd-flag-576000.rawdisk...
	I0816 11:00:37.417959    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Writing magic tar header
	I0816 11:00:37.417979    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Writing SSH key tar header
	I0816 11:00:37.418621    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | I0816 11:00:37.418585    8044 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000 ...
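The "Writing magic tar header" and "Writing SSH key tar header" lines suggest the raw disk image starts with a small tar archive carrying the freshly created SSH key, with the remainder of the 20000MB file left sparse for the guest to claim on first boot. A sketch under that assumption (the on-disk contract and the helper are inferred from the log, not taken from the driver):

-- sketch (illustrative Go, not from the minikube source) --
package main

import (
	"archive/tar"
	"os"
)

// writeRawDisk creates a raw disk image whose first bytes are a tar archive
// containing the machine's SSH key material, then extends the file sparsely
// to its full size. This mirrors what the "Writing ... tar header" lines
// imply; the exact on-disk contract is an assumption here.
func writeRawDisk(path string, key []byte, sizeBytes int64) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()

	tw := tar.NewWriter(f)
	hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0o600, Size: int64(len(key))}
	if err := tw.WriteHeader(hdr); err != nil {
		return err
	}
	if _, err := tw.Write(key); err != nil {
		return err
	}
	if err := tw.Close(); err != nil {
		return err
	}
	// Truncate upward: most filesystems keep the unwritten tail sparse.
	return f.Truncate(sizeBytes)
}

func main() {
	_ = writeRawDisk("force-systemd-flag-576000.rawdisk", []byte("ssh-rsa AAAA... jenkins"), 20000*1024*1024)
}
-- /sketch --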
	I0816 11:00:37.889400    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:00:37.889422    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/hyperkit.pid
	I0816 11:00:37.889463    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Using UUID 7c243a70-08f4-4275-bac3-0451437ab4af
	I0816 11:00:37.915657    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Generated MAC f2:78:42:8:64:71
	I0816 11:00:37.915681    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-576000
	I0816 11:00:37.915723    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 11:00:37 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7c243a70-08f4-4275-bac3-0451437ab4af", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 11:00:37.915753    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 11:00:37 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7c243a70-08f4-4275-bac3-0451437ab4af", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 11:00:37.915811    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 11:00:37 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "7c243a70-08f4-4275-bac3-0451437ab4af", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/force-systemd-flag-576000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-576000"}
	I0816 11:00:37.915856    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 11:00:37 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 7c243a70-08f4-4275-bac3-0451437ab4af -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/force-systemd-flag-576000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-576000"
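The two DEBUG lines above contain the full hyperkit invocation. For reference, a sketch that rebuilds the same argument list with one comment per flag; the flag glosses are standard hyperkit options, and the helper itself is illustrative:

-- sketch (illustrative Go, not from the minikube source) --
package main

import (
	"fmt"
	"os/exec"
)

// hyperkitArgs rebuilds the command line shown in the log. Flag meanings:
//   -A            generate ACPI tables for the guest
//   -u            RTC keeps UTC
//   -F <file>     write the hyperkit pid file
//   -c / -m       vCPU count and guest memory
//   -s <slot,dev> PCI slots: hostbridge, lpc, virtio-net, virtio-blk, ahci-cd, virtio-rnd
//   -U <uuid>     VM UUID (ties the VM to its DHCP lease identifier)
//   -l com1,...   serial console on an auto-allocated pty, logged to console-ring
//   -f kexec,...  boot directly from kernel+initrd with the given cmdline
func hyperkitArgs(stateDir, uuid, cmdline string) []string {
	return []string{
		"-A", "-u",
		"-F", stateDir + "/hyperkit.pid",
		"-c", "2", "-m", "2048M",
		"-s", "0:0,hostbridge",
		"-s", "31,lpc",
		"-s", "1:0,virtio-net",
		"-U", uuid,
		"-s", "2:0,virtio-blk," + stateDir + "/force-systemd-flag-576000.rawdisk",
		"-s", "3,ahci-cd," + stateDir + "/boot2docker.iso",
		"-s", "4,virtio-rnd",
		"-l", "com1,autopty=" + stateDir + "/tty,log=" + stateDir + "/console-ring",
		"-f", "kexec," + stateDir + "/bzimage," + stateDir + "/initrd," + cmdline,
	}
}

func main() {
	args := hyperkitArgs("<state-dir>", "7c243a70-08f4-4275-bac3-0451437ab4af", "loglevel=3 console=ttyS0 ...")
	fmt.Println(exec.Command("/usr/local/bin/hyperkit", args...).String())
}
-- /sketch --

The -U UUID is what later lets the driver correlate the VM with a lease identifier, which is why a lease for the generated MAC never appearing is fatal to machine creation.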
	I0816 11:00:37.915870    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 11:00:37 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 11:00:37.918951    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 11:00:37 DEBUG: hyperkit: Pid is 8045
	I0816 11:00:37.919955    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Attempt 0
	I0816 11:00:37.919977    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:00:37.920079    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 8045
	I0816 11:00:37.921003    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Searching for f2:78:42:8:64:71 in /var/db/dhcpd_leases ...
	I0816 11:00:37.921074    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:00:37.921097    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:00:37.921140    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:00:37.921160    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:00:37.921177    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:00:37.921199    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:00:37.921217    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:00:37.921229    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:00:37.921239    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:00:37.921247    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:00:37.921260    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:00:37.921267    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:00:37.921273    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:00:37.921281    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:00:37.921291    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:00:37.921299    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:00:37.921307    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:00:37.921316    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
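Note that the generated MAC (f2:78:42:8:64:71) and the hw_address fields in the lease dump (e.g. a:ab:1f:8:77:9a) both use zero-stripped octets, so comparing against a conventionally zero-padded MAC would never match. A small sketch of normalizing both sides before comparison (an illustrative helper, not driver code):

-- sketch (illustrative Go, not from the minikube source) --
package main

import (
	"fmt"
	"strings"
)

// normalizeMAC strips leading zeros from each octet so that
// "0a:ab:1f:08:77:9a" and "a:ab:1f:8:77:9a" compare equal, matching the
// zero-stripped form macOS writes into /var/db/dhcpd_leases.
func normalizeMAC(mac string) string {
	parts := strings.Split(strings.ToLower(mac), ":")
	for i, p := range parts {
		p = strings.TrimLeft(p, "0")
		if p == "" {
			p = "0" // an all-zero octet must stay "0", not become empty
		}
		parts[i] = p
	}
	return strings.Join(parts, ":")
}

func main() {
	fmt.Println(normalizeMAC("0a:ab:1f:08:77:9a") == normalizeMAC("a:ab:1f:8:77:9a")) // true
}
-- /sketch --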
	I0816 11:00:37.926819    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 11:00:37 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 11:00:37.934909    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 11:00:37 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-flag-576000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 11:00:37.935927    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 11:00:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 11:00:37.935950    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 11:00:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 11:00:37.935957    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 11:00:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 11:00:37.935963    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 11:00:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 11:00:38.309973    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 11:00:38 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 11:00:38.309989    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 11:00:38 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 11:00:38.424762    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 11:00:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 11:00:38.424779    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 11:00:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 11:00:38.424815    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 11:00:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 11:00:38.424839    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 11:00:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 11:00:38.425665    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 11:00:38 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 11:00:38.425676    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 11:00:38 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 11:00:39.921857    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Attempt 1
	I0816 11:00:39.921874    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:00:39.921986    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 8045
	I0816 11:00:39.922782    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Searching for f2:78:42:8:64:71 in /var/db/dhcpd_leases ...
	I0816 11:00:39.922866    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:00:39.922879    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:00:39.922887    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:00:39.922914    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:00:39.922926    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:00:39.922944    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:00:39.922955    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:00:39.922963    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:00:39.922978    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:00:39.922988    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:00:39.922995    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:00:39.923003    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:00:39.923011    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:00:39.923018    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:00:39.923030    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:00:39.923041    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:00:39.923049    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:00:39.923063    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
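	[editor's note] What the loop above is doing: after launching the VM, the driver polls /var/db/dhcpd_leases roughly every two seconds, looking for a lease whose hardware address matches the VM's generated MAC (f2:78:42:8:64:71 here). Since the VM never acquires a lease, every attempt re-dumps the same 17 stale entries, all named "minikube". Note that macOS writes MAC octets without zero padding (e.g. "8" rather than "08"), so the search string uses the same unpadded form. Below is a minimal sketch of that matching step, assuming entries already rendered in the {Name:... HWAddress:...} form shown in this log; leaseRe and findIPByMAC are illustrative names, not the driver's actual API, and the on-disk /var/db/dhcpd_leases format differs from this rendered form.

```go
package main

import (
	"fmt"
	"regexp"
	"time"
)

// leaseRe matches the rendered dhcp entry lines seen in the log above.
// Field names follow the log output; treat this as illustrative only.
var leaseRe = regexp.MustCompile(`\{Name:(\S+) IPAddress:(\S+) HWAddress:(\S+) ID:(\S+) Lease:(\S+)\}`)

// findIPByMAC scans rendered lease entries for the target hardware address
// and returns the associated IP. The target MAC must use macOS's unpadded
// octet form (e.g. "f2:78:42:8:64:71").
func findIPByMAC(entries []string, mac string) (string, bool) {
	for _, e := range entries {
		if m := leaseRe.FindStringSubmatch(e); m != nil && m[3] == mac {
			return m[2], true // m[2] is the IPAddress field
		}
	}
	return "", false
}

func main() {
	// One stale entry from the table dumped in the log; the real table has 17.
	entries := []string{
		"{Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}",
	}
	target := "f2:78:42:8:64:71"
	// Poll roughly every two seconds, as the attempt timestamps above suggest.
	for attempt := 1; attempt <= 3; attempt++ {
		if ip, ok := findIPByMAC(entries, target); ok {
			fmt.Printf("found %s at %s\n", target, ip)
			return
		}
		fmt.Printf("Attempt %d: %s not yet in lease table\n", attempt, target)
		time.Sleep(2 * time.Second)
	}
}
```

	The attempts that follow repeat this scan verbatim until the driver's retry budget is exhausted; the log is left intact below as test evidence.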
	I0816 11:00:41.924458    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Attempt 2
	I0816 11:00:41.924474    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:00:41.924567    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 8045
	I0816 11:00:41.925337    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Searching for f2:78:42:8:64:71 in /var/db/dhcpd_leases ...
	I0816 11:00:41.925396    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:00:41.925405    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:00:41.925435    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:00:41.925449    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:00:41.925463    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:00:41.925472    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:00:41.925487    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:00:41.925498    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:00:41.925507    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:00:41.925515    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:00:41.925523    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:00:41.925531    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:00:41.925539    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:00:41.925547    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:00:41.925554    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:00:41.925560    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:00:41.925569    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:00:41.925577    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:00:43.768116    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 11:00:43 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 11:00:43.768327    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 11:00:43 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 11:00:43.768346    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 11:00:43 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 11:00:43.788786    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | 2024/08/16 11:00:43 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 11:00:43.926092    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Attempt 3
	I0816 11:00:43.926115    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:00:43.926287    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 8045
	I0816 11:00:43.927742    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Searching for f2:78:42:8:64:71 in /var/db/dhcpd_leases ...
	I0816 11:00:43.927855    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:00:43.927877    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:00:43.927917    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:00:43.927937    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:00:43.927965    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:00:43.927977    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:00:43.928018    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:00:43.928037    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:00:43.928055    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:00:43.928068    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:00:43.928089    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:00:43.928106    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:00:43.928121    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:00:43.928132    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:00:43.928143    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:00:43.928154    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:00:43.928164    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:00:43.928176    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:00:45.928812    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Attempt 4
	I0816 11:00:45.928827    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:00:45.928935    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 8045
	I0816 11:00:45.929752    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Searching for f2:78:42:8:64:71 in /var/db/dhcpd_leases ...
	I0816 11:00:45.929806    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:00:45.929819    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:00:45.929835    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:00:45.929854    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:00:45.929864    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:00:45.929885    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:00:45.929900    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:00:45.929914    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:00:45.929928    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:00:45.929938    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:00:45.929946    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:00:45.929954    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:00:45.929967    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:00:45.929975    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:00:45.929983    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:00:45.929991    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:00:45.929998    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:00:45.930011    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:00:47.930136    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Attempt 5
	I0816 11:00:47.930151    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:00:47.930203    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 8045
	I0816 11:00:47.931141    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Searching for f2:78:42:8:64:71 in /var/db/dhcpd_leases ...
	I0816 11:00:47.931201    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:00:47.931212    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:00:47.931221    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:00:47.931228    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:00:47.931235    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:00:47.931242    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:00:47.931256    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:00:47.931267    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:00:47.931275    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:00:47.931284    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:00:47.931291    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:00:47.931303    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:00:47.931310    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:00:47.931327    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:00:47.931336    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:00:47.931347    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:00:47.931356    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:00:47.931364    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:00:49.931929    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Attempt 6
	I0816 11:00:49.931945    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:00:49.932023    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 8045
	I0816 11:00:49.932897    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Searching for f2:78:42:8:64:71 in /var/db/dhcpd_leases ...
	I0816 11:00:49.932945    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:00:49.932956    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:00:49.932964    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:00:49.932972    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:00:49.932987    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:00:49.932993    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:00:49.933001    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:00:49.933007    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:00:49.933022    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:00:49.933036    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:00:49.933047    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:00:49.933056    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:00:49.933063    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:00:49.933072    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:00:49.933087    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:00:49.933100    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:00:49.933109    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:00:49.933115    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:00:51.933614    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Attempt 7
	I0816 11:00:51.933627    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:00:51.933683    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 8045
	I0816 11:00:51.934540    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Searching for f2:78:42:8:64:71 in /var/db/dhcpd_leases ...
	I0816 11:00:51.934579    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:00:51.934594    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:00:51.934611    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:00:51.934625    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:00:51.934639    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:00:51.934652    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:00:51.934659    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:00:51.934668    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:00:51.934675    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:00:51.934683    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:00:51.934690    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:00:51.934698    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:00:51.934720    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:00:51.934732    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:00:51.934743    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:00:51.934749    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:00:51.934755    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:00:51.934764    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:00:53.934760    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Attempt 8
	I0816 11:00:53.934774    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:00:53.934865    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 8045
	I0816 11:00:53.935685    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Searching for f2:78:42:8:64:71 in /var/db/dhcpd_leases ...
	I0816 11:00:53.935739    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:00:53.935750    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:00:53.935762    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:00:53.935776    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:00:53.935785    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:00:53.935792    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:00:53.935810    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:00:53.935823    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:00:53.935833    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:00:53.935840    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:00:53.935864    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:00:53.935880    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:00:53.935892    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:00:53.935898    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:00:53.935905    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:00:53.935914    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:00:53.935921    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:00:53.935929    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:00:55.936572    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Attempt 9
	I0816 11:00:55.936585    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:00:55.936689    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 8045
	I0816 11:00:55.937702    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Searching for f2:78:42:8:64:71 in /var/db/dhcpd_leases ...
	I0816 11:00:55.937749    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:00:55.937759    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:00:55.937793    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:00:55.937810    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:00:55.937823    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:00:55.937841    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:00:55.937850    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:00:55.937857    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:00:55.937865    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:00:55.937872    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:00:55.937880    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:00:55.937887    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:00:55.937898    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:00:55.937906    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:00:55.937914    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:00:55.937921    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:00:55.937929    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:00:55.937946    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:00:57.939836    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Attempt 10
	I0816 11:00:57.939848    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:00:57.939913    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 8045
	I0816 11:00:57.940959    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Searching for f2:78:42:8:64:71 in /var/db/dhcpd_leases ...
	I0816 11:00:57.940999    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:00:57.941009    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:00:57.941025    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:00:57.941036    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:00:57.941045    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:00:57.941057    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:00:57.941066    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:00:57.941074    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:00:57.941093    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:00:57.941105    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:00:57.941116    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:00:57.941123    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:00:57.941133    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:00:57.941147    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:00:57.941154    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:00:57.941162    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:00:57.941169    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:00:57.941175    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:00:59.941603    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Attempt 11
	I0816 11:00:59.941615    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:00:59.941743    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 8045
	I0816 11:00:59.942527    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Searching for f2:78:42:8:64:71 in /var/db/dhcpd_leases ...
	I0816 11:00:59.942574    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:00:59.942592    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:00:59.942609    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:00:59.942618    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:00:59.942628    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:00:59.942638    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:00:59.942646    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:00:59.942652    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:00:59.942668    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:00:59.942680    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:00:59.942688    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:00:59.942697    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:00:59.942704    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:00:59.942713    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:00:59.942724    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:00:59.942733    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:00:59.942742    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:00:59.942750    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:01:01.944689    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Attempt 12
	I0816 11:01:01.944702    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:01:01.944770    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 8045
	I0816 11:01:01.945821    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Searching for f2:78:42:8:64:71 in /var/db/dhcpd_leases ...
	I0816 11:01:01.945862    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:01:01.945872    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:01:01.945884    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:01:01.945891    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:01:01.945905    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:01:01.945917    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:01:01.945925    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:01:01.945933    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:01:01.945941    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:01:01.945948    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:01:01.945964    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:01:01.945975    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:01:01.945983    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:01:01.945991    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:01:01.946004    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:01:01.946013    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:01:01.946021    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:01:01.946029    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:01:03.946163    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Attempt 13
	I0816 11:01:03.946177    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:01:03.946249    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 8045
	I0816 11:01:03.947105    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Searching for f2:78:42:8:64:71 in /var/db/dhcpd_leases ...
	I0816 11:01:03.947159    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:01:03.947178    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:01:03.947199    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:01:03.947210    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:01:03.947219    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:01:03.947229    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:01:03.947237    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:01:03.947245    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:01:03.947252    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:01:03.947259    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:01:03.947271    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:01:03.947281    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:01:03.947291    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:01:03.947310    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:01:03.947317    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:01:03.947326    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:01:03.947332    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:01:03.947348    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
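What the driver is doing here: hyperkit has started the VM (pid 8045) and the machine driver is polling the host DHCP daemon's lease database, /var/db/dhcpd_leases, roughly every 2 seconds, waiting for an entry whose hardware address matches the MAC it generated for the new VM (f2:78:42:8:64:71). The 17 entries it keeps finding belong to earlier minikube VMs (192.169.0.2 through 192.169.0.18); the new VM never requests a lease, so the search never matches. As a rough illustration of the lookup step only, not the driver's actual code, a minimal Go sketch that scans the stock macOS dhcpd_leases layout for a MAC could look like the following; the file layout, the "1," hardware-type prefix on hw_address, and the helper name leaseIPForMAC are assumptions:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// leaseIPForMAC scans a dhcpd_leases file for an entry whose hw_address
	// matches mac and returns its ip_address, or "" if no entry matches.
	func leaseIPForMAC(path, mac string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()

		var ip, hw string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				// hw_address carries a hardware-type prefix, e.g. "1,de:40:9:4d:dc:28";
				// assuming type 1 (Ethernet) here.
				hw = strings.TrimPrefix(line, "hw_address=1,")
			case line == "}": // closing brace ends one lease entry
				if strings.EqualFold(hw, mac) {
					return ip, nil
				}
				ip, hw = "", ""
			}
		}
		return "", sc.Err()
	}

	func main() {
		ip, err := leaseIPForMAC("/var/db/dhcpd_leases", "f2:78:42:8:64:71")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if ip == "" {
			fmt.Println("MAC not yet in leases; the VM has not obtained an address")
			return
		}
		fmt.Println("VM IP:", ip)
	}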
	[... Attempts 14 through 27 (11:01:05-11:01:31, one rescan roughly every 2 seconds) omitted: each repeats the identical pattern - re-read the hyperkit pid (8045) from json, rescan /var/db/dhcpd_leases, find the same 17 entries for 192.169.0.2-192.169.0.18, and fail to match f2:78:42:8:64:71 ...]
	I0816 11:01:33.983657    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Attempt 28
	I0816 11:01:33.984249    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:01:33.984282    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 8045
	I0816 11:01:33.984635    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Searching for f2:78:42:8:64:71 in /var/db/dhcpd_leases ...
	I0816 11:01:33.984696    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:01:33.984717    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:01:33.984750    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:01:33.984782    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:01:33.984794    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:01:33.984811    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:01:33.984825    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:01:33.984834    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:01:33.984900    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:01:33.984926    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:01:33.984937    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:01:33.984945    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:01:33.984976    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:01:33.985005    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:01:33.985016    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:01:33.985025    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:01:33.985070    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:01:33.985256    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:01:35.984803    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Attempt 29
	I0816 11:01:35.984815    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 11:01:35.984904    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | hyperkit pid from json: 8045
	I0816 11:01:35.985753    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Searching for f2:78:42:8:64:71 in /var/db/dhcpd_leases ...
	I0816 11:01:35.985805    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 11:01:35.985818    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 11:01:35.985834    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 11:01:35.985852    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 11:01:35.985864    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 11:01:35.985873    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 11:01:35.985881    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 11:01:35.985889    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 11:01:35.985898    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 11:01:35.985905    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 11:01:35.985913    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 11:01:35.985919    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 11:01:35.985926    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 11:01:35.985934    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 11:01:35.985941    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 11:01:35.985949    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 11:01:35.985956    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 11:01:35.985964    7684 main.go:141] libmachine: (force-systemd-flag-576000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 11:01:37.988013    7684 client.go:171] duration metric: took 1m0.914163128s to LocalClient.Create
	I0816 11:01:39.990101    7684 start.go:128] duration metric: took 1m2.947890656s to createHost
	I0816 11:01:39.990171    7684 start.go:83] releasing machines lock for "force-systemd-flag-576000", held for 1m2.948011785s
	W0816 11:01:39.990239    7684 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p force-systemd-flag-576000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for f2:78:42:8:64:71
	* Failed to start hyperkit VM. Running "minikube delete -p force-systemd-flag-576000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for f2:78:42:8:64:71
	I0816 11:01:40.053327    7684 out.go:201] 
	W0816 11:01:40.074588    7684 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for f2:78:42:8:64:71
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for f2:78:42:8:64:71
	W0816 11:01:40.074599    7684 out.go:270] * 
	* 
	W0816 11:01:40.075276    7684 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 11:01:40.136371    7684 out.go:201] 

                                                
                                                
** /stderr **
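
The repeated "Searching for f2:78:42:8:64:71 in /var/db/dhcpd_leases" attempts above are the hyperkit driver polling macOS's DHCP lease file until the new VM's MAC address shows up, and eventually giving up with "IP address never found in dhcp leases file". Below is a minimal Go sketch of that polling loop, assuming the usual "{ ... }" lease blocks with ip_address=/hw_address= fields; findIPByMAC, the 30-attempt limit, and the 2-second cadence are illustrative guesses from the log timestamps, not the driver's actual code.

    package main

    import (
        "fmt"
        "os"
        "regexp"
        "strings"
        "time"
    )

    // findIPByMAC scans a dhcpd_leases-style file for a lease block that
    // mentions the given MAC and returns that block's ip_address field.
    // Illustrative only: assumes "{ ... }" blocks containing ip_address=
    // and hw_address= lines, as macOS writes them in /var/db/dhcpd_leases.
    func findIPByMAC(leaseFile, mac string) (string, bool) {
        data, err := os.ReadFile(leaseFile)
        if err != nil {
            return "", false
        }
        ipRe := regexp.MustCompile(`ip_address=(\S+)`)
        for _, block := range strings.Split(string(data), "}") {
            if strings.Contains(block, mac) {
                if m := ipRe.FindStringSubmatch(block); m != nil {
                    return m[1], true
                }
            }
        }
        return "", false
    }

    func main() {
        mac := "f2:78:42:8:64:71" // the MAC the log above kept searching for
        for attempt := 0; attempt < 30; attempt++ {
            if ip, ok := findIPByMAC("/var/db/dhcpd_leases", mac); ok {
                fmt.Printf("found IP %s on attempt %d\n", ip, attempt)
                return
            }
            time.Sleep(2 * time.Second) // the log shows roughly 2s between attempts
        }
        fmt.Fprintln(os.Stderr, "IP address never found in dhcp leases file")
        os.Exit(1)
    }

In this run the lease file already held 17 entries for other minikube VMs, but the new VM's MAC never appeared, which is why every attempt re-lists the same 17 leases before failing.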
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-flag-576000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-576000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-flag-576000 ssh "docker info --format {{.CgroupDriver}}": exit status 50 (181.925395ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node force-systemd-flag-576000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
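
The odd "minikube delete <no value>" suggestion above is Go text/template output: when a template field resolves to a nil interface value, the engine prints the literal placeholder "<no value>", which is what happens here because the profile name never made it into the suggestion's template data. A self-contained demonstration, with a hypothetical template string and field name (not minikube's actual suggestion template):

    package main

    import (
        "os"
        "text/template"
    )

    func main() {
        // ".Profile" is absent from the data map, so it evaluates to a nil
        // interface value, which text/template renders as "<no value>".
        t := template.Must(template.New("suggestion").Parse(
            "    Recreate the cluster by running:\n" +
                "    minikube delete {{.Profile}}\n" +
                "    minikube start {{.Profile}}\n"))
        _ = t.Execute(os.Stdout, map[string]interface{}{})
    }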
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-flag-576000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 50
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-08-16 11:01:40.430465 -0700 PDT m=+4443.091909437
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-576000 -n force-systemd-flag-576000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-576000 -n force-systemd-flag-576000: exit status 7 (78.919473ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 11:01:40.507481    8063 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0816 11:01:40.507500    8063 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-576000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "force-systemd-flag-576000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-576000
E0816 11:01:45.696466    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/skaffold-359000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-576000: (5.239716609s)
--- FAIL: TestForceSystemdFlag (252.12s)
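
Each "(dbg) Run:" / "(dbg) Non-zero exit:" pair in this test is the integration harness shelling out to the minikube binary under test and asserting on its exit status. A minimal sketch of that pattern, with invented helper and test names (the suite's real helpers live in helpers_test.go and docker_test.go):

    package integration

    import (
        "os/exec"
        "testing"
    )

    // runMinikube is a hypothetical stand-in for the suite's Run helper:
    // it executes the binary under test and returns its combined output.
    func runMinikube(t *testing.T, args ...string) ([]byte, error) {
        t.Helper()
        cmd := exec.Command("out/minikube-darwin-amd64", args...)
        t.Logf("(dbg) Run: %s", cmd.String())
        return cmd.CombinedOutput()
    }

    func TestForceSystemdFlagSketch(t *testing.T) {
        out, err := runMinikube(t, "start", "-p", "force-systemd-flag-576000",
            "--memory=2048", "--force-systemd", "--alsologtostderr", "-v=5", "--driver=hyperkit")
        if err != nil {
            // A non-zero exit surfaces as an *exec.ExitError, which the
            // harness reports as e.g. "exit status 80" above.
            t.Fatalf("failed to start minikube: %v\n%s", err, out)
        }
    }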

                                                
                                    
TestForceSystemdEnv (234.61s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-773000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit 
E0816 10:55:18.699293    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/functional-373000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:55:35.623844    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/functional-373000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:56:32.710594    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/addons-725000/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-env-773000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (3m49.035302961s)

                                                
                                                
-- stdout --
	* [force-systemd-env-773000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the hyperkit driver based on user configuration
	* Starting "force-systemd-env-773000" primary control-plane node in "force-systemd-env-773000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "force-systemd-env-773000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 10:54:42.264790    7611 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:54:42.265067    7611 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:54:42.265072    7611 out.go:358] Setting ErrFile to fd 2...
	I0816 10:54:42.265076    7611 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:54:42.265236    7611 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1276/.minikube/bin
	I0816 10:54:42.266784    7611 out.go:352] Setting JSON to false
	I0816 10:54:42.288968    7611 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5052,"bootTime":1723825830,"procs":438,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0816 10:54:42.289060    7611 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:54:42.311269    7611 out.go:177] * [force-systemd-env-773000] minikube v1.33.1 on Darwin 14.6.1
	I0816 10:54:42.352648    7611 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:54:42.352674    7611 notify.go:220] Checking for updates...
	I0816 10:54:42.394825    7611 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:54:42.415648    7611 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0816 10:54:42.436791    7611 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:54:42.457759    7611 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:54:42.478688    7611 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0816 10:54:42.500224    7611 config.go:182] Loaded profile config "offline-docker-087000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:54:42.500308    7611 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:54:42.528782    7611 out.go:177] * Using the hyperkit driver based on user configuration
	I0816 10:54:42.570774    7611 start.go:297] selected driver: hyperkit
	I0816 10:54:42.570785    7611 start.go:901] validating driver "hyperkit" against <nil>
	I0816 10:54:42.570796    7611 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:54:42.573493    7611 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:54:42.573613    7611 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19461-1276/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0816 10:54:42.581775    7611 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0816 10:54:42.585614    7611 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:54:42.585632    7611 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0816 10:54:42.585662    7611 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 10:54:42.585871    7611 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0816 10:54:42.585901    7611 cni.go:84] Creating CNI manager for ""
	I0816 10:54:42.585917    7611 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 10:54:42.585922    7611 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 10:54:42.585988    7611 start.go:340] cluster config:
	{Name:force-systemd-env-773000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-773000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:54:42.586075    7611 iso.go:125] acquiring lock: {Name:mkb430b43b5e7a8a683454482519837ed996c78d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:54:42.606905    7611 out.go:177] * Starting "force-systemd-env-773000" primary control-plane node in "force-systemd-env-773000" cluster
	I0816 10:54:42.648703    7611 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:54:42.648726    7611 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0816 10:54:42.648745    7611 cache.go:56] Caching tarball of preloaded images
	I0816 10:54:42.648846    7611 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 10:54:42.648855    7611 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:54:42.648921    7611 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/force-systemd-env-773000/config.json ...
	I0816 10:54:42.648938    7611 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/force-systemd-env-773000/config.json: {Name:mkc51d5212bc433b7c626dc72131ae269c9ebd20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:54:42.649228    7611 start.go:360] acquireMachinesLock for force-systemd-env-773000: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:55:22.136900    7611 start.go:364] duration metric: took 39.488860788s to acquireMachinesLock for "force-systemd-env-773000"
	I0816 10:55:22.136950    7611 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-773000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-773000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:55:22.137014    7611 start.go:125] createHost starting for "" (driver="hyperkit")
	I0816 10:55:22.158582    7611 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0816 10:55:22.158725    7611 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:55:22.158760    7611 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:55:22.167196    7611 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53854
	I0816 10:55:22.167569    7611 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:55:22.168008    7611 main.go:141] libmachine: Using API Version  1
	I0816 10:55:22.168018    7611 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:55:22.168265    7611 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:55:22.168385    7611 main.go:141] libmachine: (force-systemd-env-773000) Calling .GetMachineName
	I0816 10:55:22.168474    7611 main.go:141] libmachine: (force-systemd-env-773000) Calling .DriverName
	I0816 10:55:22.168558    7611 start.go:159] libmachine.API.Create for "force-systemd-env-773000" (driver="hyperkit")
	I0816 10:55:22.168580    7611 client.go:168] LocalClient.Create starting
	I0816 10:55:22.168612    7611 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem
	I0816 10:55:22.168661    7611 main.go:141] libmachine: Decoding PEM data...
	I0816 10:55:22.168683    7611 main.go:141] libmachine: Parsing certificate...
	I0816 10:55:22.168749    7611 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem
	I0816 10:55:22.168786    7611 main.go:141] libmachine: Decoding PEM data...
	I0816 10:55:22.168796    7611 main.go:141] libmachine: Parsing certificate...
	I0816 10:55:22.168808    7611 main.go:141] libmachine: Running pre-create checks...
	I0816 10:55:22.168814    7611 main.go:141] libmachine: (force-systemd-env-773000) Calling .PreCreateCheck
	I0816 10:55:22.168885    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:55:22.169030    7611 main.go:141] libmachine: (force-systemd-env-773000) Calling .GetConfigRaw
	I0816 10:55:22.179931    7611 main.go:141] libmachine: Creating machine...
	I0816 10:55:22.179941    7611 main.go:141] libmachine: (force-systemd-env-773000) Calling .Create
	I0816 10:55:22.180053    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:55:22.180210    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | I0816 10:55:22.180032    7632 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:55:22.180259    7611 main.go:141] libmachine: (force-systemd-env-773000) Downloading /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1276/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 10:55:22.441368    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | I0816 10:55:22.441269    7632 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/id_rsa...
	I0816 10:55:22.585970    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | I0816 10:55:22.585862    7632 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/force-systemd-env-773000.rawdisk...
	I0816 10:55:22.585996    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Writing magic tar header
	I0816 10:55:22.586009    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Writing SSH key tar header
	I0816 10:55:22.586591    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | I0816 10:55:22.586549    7632 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000 ...
	I0816 10:55:22.959825    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:55:22.959844    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/hyperkit.pid
	I0816 10:55:22.959856    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Using UUID 65fb7d7d-ad90-447a-a2bd-39d8273c780e
	I0816 10:55:22.984824    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Generated MAC 32:6d:b1:37:9c:9b
	I0816 10:55:22.984841    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-773000
	I0816 10:55:22.984874    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:55:22 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"65fb7d7d-ad90-447a-a2bd-39d8273c780e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:55:22.984905    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:55:22 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"65fb7d7d-ad90-447a-a2bd-39d8273c780e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:55:22.984950    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:55:22 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "65fb7d7d-ad90-447a-a2bd-39d8273c780e", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/force-systemd-env-773000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-773000"}
	I0816 10:55:22.984981    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:55:22 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 65fb7d7d-ad90-447a-a2bd-39d8273c780e -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/force-systemd-env-773000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-773000"
	I0816 10:55:22.985020    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:55:22 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 10:55:22.987892    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:55:22 DEBUG: hyperkit: Pid is 7633
	I0816 10:55:22.988397    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 0
	I0816 10:55:22.988413    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:55:22.988546    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7633
	I0816 10:55:22.989526    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for 32:6d:b1:37:9c:9b in /var/db/dhcpd_leases ...
	I0816 10:55:22.989549    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:55:22.989585    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:55:22.989600    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:55:22.989616    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:55:22.989634    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:55:22.989646    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:55:22.989661    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:55:22.989678    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:55:22.989693    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:55:22.989706    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:55:22.989720    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:55:22.989734    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:55:22.989749    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:55:22.989762    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:55:22.989775    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:55:22.989788    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:55:22.989803    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:55:22.989831    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:55:22.995430    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:55:22 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 10:55:23.003354    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:55:23 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 10:55:23.004281    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:55:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:55:23.004305    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:55:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:55:23.004317    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:55:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:55:23.004332    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:55:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:55:23.376613    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:55:23 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 10:55:23.376638    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:55:23 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 10:55:23.491408    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:55:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:55:23.491423    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:55:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:55:23.491435    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:55:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:55:23.491448    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:55:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:55:23.492368    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:55:23 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 10:55:23.492380    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:55:23 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 10:55:24.991707    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 1
	I0816 10:55:24.991724    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:55:24.991827    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7633
	I0816 10:55:24.992620    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for 32:6d:b1:37:9c:9b in /var/db/dhcpd_leases ...
	I0816 10:55:24.992673    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:55:24.992685    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:55:24.992697    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:55:24.992704    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:55:24.992720    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:55:24.992728    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:55:24.992734    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:55:24.992740    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:55:24.992747    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:55:24.992755    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:55:24.992767    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:55:24.992773    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:55:24.992788    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:55:24.992805    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:55:24.992815    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:55:24.992822    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:55:24.992839    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:55:24.992854    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:55:26.994133    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 2
	I0816 10:55:26.994149    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:55:26.994243    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7633
	I0816 10:55:26.995075    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for 32:6d:b1:37:9c:9b in /var/db/dhcpd_leases ...
	I0816 10:55:26.995134    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:55:26.995149    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:55:26.995168    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:55:26.995174    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:55:26.995183    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:55:26.995192    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:55:26.995203    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:55:26.995215    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:55:26.995226    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:55:26.995234    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:55:26.995249    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:55:26.995259    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:55:26.995265    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:55:26.995274    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:55:26.995281    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:55:26.995289    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:55:26.995303    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:55:26.995315    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:55:28.849652    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:55:28 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0816 10:55:28.849822    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:55:28 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0816 10:55:28.849831    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:55:28 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0816 10:55:28.872137    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:55:28 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0816 10:55:28.997269    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 3
	I0816 10:55:28.997294    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:55:28.997472    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7633
	I0816 10:55:28.999028    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for 32:6d:b1:37:9c:9b in /var/db/dhcpd_leases ...
	I0816 10:55:28.999149    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:55:28.999171    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:55:28.999228    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:55:28.999240    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:55:28.999296    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:55:28.999312    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:55:28.999322    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:55:28.999334    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:55:28.999366    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:55:28.999383    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:55:28.999400    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:55:28.999411    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:55:28.999445    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:55:28.999469    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:55:28.999493    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:55:28.999508    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:55:28.999524    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:55:28.999538    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
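(Each attempt above re-reads /var/db/dhcpd_leases looking for the new VM's MAC, 32:6d:b1:37:9c:9b; every pass finds only the 17 stale minikube entries, so the driver keeps polling on a roughly two-second interval. The sketch below is an illustrative approximation of that scan, not the driver's actual code, and it assumes bootpd's brace-delimited key=value lease-file format. Note the zero-suppressed octets in the entries above, such as a:ab:1f:8:77:9a, which is why the comparison normalizes both sides.)

	// Illustrative sketch (not the driver's actual code): scan a
	// dhcpd_leases-style file for the IP bound to a given MAC address.
	// Assumes macOS bootpd's brace-delimited key=value blocks, e.g.
	// hw_address=1,1a:c0:fb:22:14:60 and ip_address=192.169.0.18.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// normalizeMAC strips leading zeros from each octet so that
	// "0a:ab:1f:08:77:9a" compares equal to "a:ab:1f:8:77:9a",
	// matching the zero-suppressed form seen in the log above.
	func normalizeMAC(mac string) string {
		parts := strings.Split(strings.ToLower(mac), ":")
		for i, p := range parts {
			parts[i] = strings.TrimLeft(p, "0")
			if parts[i] == "" {
				parts[i] = "0"
			}
		}
		return strings.Join(parts, ":")
	}

	// ipForMAC returns the ip_address of the lease whose hw_address
	// matches mac, or "" if no entry matches.
	func ipForMAC(path, mac string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()

		want := normalizeMAC(mac)
		var ip, hw string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case line == "{": // start of a lease block
				ip, hw = "", ""
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				// value is "<type>,<mac>", e.g. "1,1a:c0:fb:22:14:60"
				if i := strings.Index(line, ","); i >= 0 {
					hw = normalizeMAC(line[i+1:])
				}
			case line == "}": // end of block: check for a match
				if hw == want && ip != "" {
					return ip, nil
				}
			}
		}
		return "", sc.Err()
	}

	func main() {
		ip, err := ipForMAC("/var/db/dhcpd_leases", "32:6d:b1:37:9c:9b")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println(ip) // empty when, as above, the VM never obtains a lease
	}

(A scan like this returning "" on every pass is exactly what drives the Attempt 4, Attempt 5, ... retries that follow.)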
	I0816 10:55:31.001291    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 4
	I0816 10:55:31.001308    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:55:31.001372    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7633
	I0816 10:55:31.002161    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for 32:6d:b1:37:9c:9b in /var/db/dhcpd_leases ...
	I0816 10:55:31.002215    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:55:31.002226    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:55:31.002243    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:55:31.002260    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:55:31.002285    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:55:31.002299    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:55:31.002308    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:55:31.002334    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:55:31.002342    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:55:31.002350    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:55:31.002358    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:55:31.002366    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:55:31.002375    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:55:31.002381    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:55:31.002389    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:55:31.002399    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:55:31.002407    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:55:31.002416    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:55:33.004389    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 5
	I0816 10:55:33.004405    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:55:33.004494    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7633
	I0816 10:55:33.005304    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for 32:6d:b1:37:9c:9b in /var/db/dhcpd_leases ...
	I0816 10:55:33.005349    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:55:33.005359    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:55:33.005369    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:55:33.005382    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:55:33.005391    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:55:33.005407    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:55:33.005415    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:55:33.005422    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:55:33.005430    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:55:33.005436    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:55:33.005444    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:55:33.005451    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:55:33.005459    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:55:33.005468    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:55:33.005475    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:55:33.005491    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:55:33.005503    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:55:33.005518    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:55:35.005705    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 6
	I0816 10:55:35.005718    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:55:35.005794    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7633
	I0816 10:55:35.006613    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for 32:6d:b1:37:9c:9b in /var/db/dhcpd_leases ...
	I0816 10:55:35.006670    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:55:35.006683    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:55:35.006699    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:55:35.006706    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:55:35.006713    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:55:35.006720    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:55:35.006726    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:55:35.006736    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:55:35.006760    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:55:35.006772    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:55:35.006779    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:55:35.006785    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:55:35.006792    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:55:35.006806    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:55:35.006816    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:55:35.006822    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:55:35.006835    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:55:35.006845    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:55:37.008779    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 7
	I0816 10:55:37.008796    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:55:37.008854    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7633
	I0816 10:55:37.009676    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for 32:6d:b1:37:9c:9b in /var/db/dhcpd_leases ...
	I0816 10:55:37.009730    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:55:37.009740    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:55:37.009749    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:55:37.009758    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:55:37.009779    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:55:37.009787    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:55:37.009795    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:55:37.009804    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:55:37.009811    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:55:37.009819    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:55:37.009834    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:55:37.009845    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:55:37.009856    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:55:37.009864    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:55:37.009871    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:55:37.009879    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:55:37.009886    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:55:37.009895    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:55:39.011505    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 8
	I0816 10:55:39.011519    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:55:39.011584    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7633
	I0816 10:55:39.012370    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for 32:6d:b1:37:9c:9b in /var/db/dhcpd_leases ...
	I0816 10:55:39.012424    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:55:39.012437    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:55:39.012449    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:55:39.012459    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:55:39.012473    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:55:39.012487    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:55:39.012515    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:55:39.012527    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:55:39.012534    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:55:39.012540    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:55:39.012548    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:55:39.012557    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:55:39.012564    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:55:39.012582    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:55:39.012590    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:55:39.012599    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:55:39.012608    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:55:39.012616    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:55:41.013525    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 9
	I0816 10:55:41.013537    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:55:41.013611    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7633
	I0816 10:55:41.014384    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for 32:6d:b1:37:9c:9b in /var/db/dhcpd_leases ...
	I0816 10:55:41.014431    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:55:41.014439    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:55:41.014447    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:55:41.014454    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:55:41.014465    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:55:41.014474    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:55:41.014501    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:55:41.014514    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:55:41.014533    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:55:41.014542    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:55:41.014549    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:55:41.014559    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:55:41.014567    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:55:41.014574    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:55:41.014583    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:55:41.014591    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:55:41.014598    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:55:41.014606    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:55:43.014728    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 10
	I0816 10:55:43.014749    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:55:43.014866    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7633
	I0816 10:55:43.015678    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for 32:6d:b1:37:9c:9b in /var/db/dhcpd_leases ...
	I0816 10:55:43.015734    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:55:43.015745    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:55:43.015757    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:55:43.015770    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:55:43.015778    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:55:43.015787    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:55:43.015796    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:55:43.015808    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:55:43.015819    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:55:43.015829    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:55:43.015843    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:55:43.015855    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:55:43.015863    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:55:43.015869    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:55:43.015876    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:55:43.015882    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:55:43.015891    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:55:43.015900    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:55:45.016927    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 11
	I0816 10:55:45.016952    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:55:45.017014    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7633
	I0816 10:55:45.017803    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for 32:6d:b1:37:9c:9b in /var/db/dhcpd_leases ...
	I0816 10:55:45.017847    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:55:45.017864    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:55:45.017879    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:55:45.017887    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:55:45.017919    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:55:45.017931    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:55:45.017943    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:55:45.017955    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:55:45.017964    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:55:45.017974    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:55:45.017981    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:55:45.017990    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:55:45.017997    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:55:45.018003    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:55:45.018015    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:55:45.018022    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:55:45.018029    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:55:45.018053    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:55:47.018337    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 12
	I0816 10:55:47.018351    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:55:47.018413    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7633
	I0816 10:55:47.019239    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for 32:6d:b1:37:9c:9b in /var/db/dhcpd_leases ...
	I0816 10:55:47.019288    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:55:47.019300    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:55:47.019313    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:55:47.019320    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:55:47.019327    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:55:47.019336    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:55:47.019344    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:55:47.019350    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:55:47.019357    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:55:47.019365    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:55:47.019389    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:55:47.019397    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:55:47.019405    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:55:47.019411    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:55:47.019418    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:55:47.019427    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:55:47.019433    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:55:47.019441    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:55:49.020917    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 13
	I0816 10:55:49.020931    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:55:49.020996    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7633
	I0816 10:55:49.021822    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for 32:6d:b1:37:9c:9b in /var/db/dhcpd_leases ...
	I0816 10:55:49.021867    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:55:49.021877    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:55:49.021896    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:55:49.021902    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:55:49.021912    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:55:49.021919    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:55:49.021935    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:55:49.021950    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:55:49.021962    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:55:49.021970    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:55:49.021979    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:55:49.021990    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:55:49.021997    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:55:49.022005    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:55:49.022012    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:55:49.022020    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:55:49.022028    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:55:49.022036    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:55:51.024005    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 14
	I0816 10:55:51.024018    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:55:51.024070    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7633
	I0816 10:55:51.024895    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for 32:6d:b1:37:9c:9b in /var/db/dhcpd_leases ...
	I0816 10:55:51.024935    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:55:51.024946    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:55:51.024955    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:55:51.024961    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:55:51.024969    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:55:51.024993    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:55:51.025000    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:55:51.025009    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:55:51.025026    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:55:51.025038    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:55:51.025055    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:55:51.025067    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:55:51.025077    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:55:51.025086    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:55:51.025101    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:55:51.025114    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:55:51.025122    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:55:51.025130    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:55:53.025644    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 15
	I0816 10:55:53.025659    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:55:53.025724    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7633
	I0816 10:55:53.026557    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for 32:6d:b1:37:9c:9b in /var/db/dhcpd_leases ...
	I0816 10:55:53.026600    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:55:53.026619    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:55:53.026636    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:55:53.026645    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:55:53.026654    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:55:53.026661    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:55:53.026677    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:55:53.026698    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:55:53.026711    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:55:53.026721    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:55:53.026728    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:55:53.026734    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:55:53.026747    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:55:53.026767    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:55:53.026776    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:55:53.026786    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:55:53.026800    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:55:53.026821    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:55:55.027607    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 16
	I0816 10:55:55.027619    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:55:55.027674    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7633
	I0816 10:55:55.028473    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for 32:6d:b1:37:9c:9b in /var/db/dhcpd_leases ...
	I0816 10:55:55.028531    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:55:55.028543    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:55:55.028552    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:55:55.028562    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:55:55.028570    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:55:55.028578    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:55:55.028596    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:55:55.028609    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:55:55.028620    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:55:55.028627    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:55:55.028633    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:55:55.028641    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:55:55.028660    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:55:55.028678    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:55:55.028717    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:55:55.028742    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:55:55.028751    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:55:55.028759    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	[log condensed: attempts 17 through 29 (10:55:57 - 10:56:21) repeat the identical scan roughly every 2 seconds; each re-read of /var/db/dhcpd_leases returns the same 17 leases (192.169.0.2 - 192.169.0.18) and never an entry for 32:6d:b1:37:9c:9b]
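The block above is the hyperkit driver's IP-discovery loop: after booting the VM it re-reads macOS's DHCP lease database roughly every two seconds, looking for a lease whose hardware address matches the MAC it generated for the VM (here 32:6d:b1:37:9c:9b); thirty-odd attempts at that cadence is what produces the ~61s `LocalClient.Create` duration reported below. A minimal sketch of the idea, not minikube's actual parser — the `waitForIP` helper, the `ip_address=` field name (per the macOS bootpd lease format), and the 30-attempt budget are illustrative assumptions:

```go
package main

import (
	"fmt"
	"os"
	"strings"
	"time"
)

// waitForIP is an illustrative sketch, not minikube's implementation:
// poll the macOS DHCP lease file until an entry for the VM's MAC
// appears, or give up after maxAttempts polls.
func waitForIP(mac string, maxAttempts int) (string, error) {
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		data, err := os.ReadFile("/var/db/dhcpd_leases")
		if err != nil {
			return "", err
		}
		// Each lease is a brace-delimited block; scan blocks for the
		// MAC and, when found, pull out its ip_address field.
		for _, block := range strings.Split(string(data), "}") {
			if !strings.Contains(block, mac) {
				continue
			}
			for _, line := range strings.Split(block, "\n") {
				line = strings.TrimSpace(line)
				if v, ok := strings.CutPrefix(line, "ip_address="); ok {
					return v, nil
				}
			}
		}
		time.Sleep(2 * time.Second) // matches the ~2s cadence in the log
	}
	return "", fmt.Errorf("could not find an IP address for %s", mac)
}

func main() {
	ip, err := waitForIP("32:6d:b1:37:9c:9b", 30)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("VM IP:", ip)
}
```

Note that the failure mode in this run is not a parse error: the file is readable and full of stale `minikube` leases, but the target MAC never appears, which points at the guest never completing DHCP at all.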
	I0816 10:56:23.064283    7611 client.go:171] duration metric: took 1m0.897558351s to LocalClient.Create
	I0816 10:56:25.066371    7611 start.go:128] duration metric: took 1m2.931261597s to createHost
	I0816 10:56:25.066384    7611 start.go:83] releasing machines lock for "force-systemd-env-773000", held for 1m2.931402605s
	W0816 10:56:25.066417    7611 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 32:6d:b1:37:9c:9b
	I0816 10:56:25.066737    7611 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:56:25.066761    7611 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:56:25.075709    7611 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53856
	I0816 10:56:25.076257    7611 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:56:25.076742    7611 main.go:141] libmachine: Using API Version  1
	I0816 10:56:25.076778    7611 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:56:25.077066    7611 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:56:25.077437    7611 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:56:25.077479    7611 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:56:25.086129    7611 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53858
	I0816 10:56:25.086469    7611 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:56:25.086794    7611 main.go:141] libmachine: Using API Version  1
	I0816 10:56:25.086805    7611 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:56:25.087016    7611 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:56:25.087126    7611 main.go:141] libmachine: (force-systemd-env-773000) Calling .GetState
	I0816 10:56:25.087209    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:56:25.087276    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7633
	I0816 10:56:25.088248    7611 main.go:141] libmachine: (force-systemd-env-773000) Calling .DriverName
	I0816 10:56:25.129853    7611 out.go:177] * Deleting "force-systemd-env-773000" in hyperkit ...
	I0816 10:56:25.172113    7611 main.go:141] libmachine: (force-systemd-env-773000) Calling .Remove
	I0816 10:56:25.172228    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:56:25.172236    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:56:25.172310    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7633
	I0816 10:56:25.173232    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:56:25.173292    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | waiting for graceful shutdown
	I0816 10:56:26.175413    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:56:26.175547    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7633
	I0816 10:56:26.176482    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | waiting for graceful shutdown
	I0816 10:56:27.178576    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:56:27.178631    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7633
	I0816 10:56:27.180370    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | waiting for graceful shutdown
	I0816 10:56:28.182199    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:56:28.182251    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7633
	I0816 10:56:28.182978    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | waiting for graceful shutdown
	I0816 10:56:29.183716    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:56:29.183829    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7633
	I0816 10:56:29.184391    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | waiting for graceful shutdown
	I0816 10:56:30.185502    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:56:30.185598    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7633
	I0816 10:56:30.186607    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | sending sigkill
	I0816 10:56:30.186617    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	W0816 10:56:30.198368    7611 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 32:6d:b1:37:9c:9b
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 32:6d:b1:37:9c:9b
	I0816 10:56:30.198389    7611 start.go:729] Will try again in 5 seconds ...
	I0816 10:56:30.211321    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:56:30 WARN : hyperkit: failed to read stderr: EOF
	I0816 10:56:30.211339    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:56:30 WARN : hyperkit: failed to read stdout: EOF
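
The failure above is the hyperkit driver's IP discovery timing out. The driver has no agent inside the guest, so after launching the VM it repeatedly scans the macOS DHCP daemon's lease database, /var/db/dhcpd_leases, for the MAC address generated for the VM's virtio-net device; the "dhcp entry: {...}" lines are the parsed leases from each scan. A minimal sketch of such a scan, assuming the stock leases-file layout (name=/ip_address=/hw_address= fields between braces, with hw_address carrying a leading "1," hardware-type tag) and a hypothetical findIPForMAC helper rather than minikube's actual implementation:

	// findIPForMAC is a hypothetical helper (not minikube's code): scan a
	// /var/db/dhcpd_leases-style file for the lease whose hw_address matches
	// mac and return that lease's ip_address. Note that dhcpd writes MAC
	// octets without leading zeros (e.g. "a:ab:1f:8:77:9a"), so callers
	// should compare in that normalized form.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func findIPForMAC(path, mac string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()

		var ip, hw string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				hw = strings.TrimPrefix(line, "hw_address=")
				if i := strings.IndexByte(hw, ','); i >= 0 {
					hw = hw[i+1:] // drop the "1," hardware-type prefix
				}
			case line == "}": // end of one lease block
				if strings.EqualFold(hw, mac) {
					return ip, nil
				}
				ip, hw = "", ""
			}
		}
		if err := sc.Err(); err != nil {
			return "", err
		}
		return "", fmt.Errorf("could not find an IP address for %s", mac)
	}

	func main() {
		ip, err := findIPForMAC("/var/db/dhcpd_leases", "aa:c1:87:f2:b5:1b")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println(ip)
	}

The driver reruns this scan every two seconds ("Attempt 0", "Attempt 1", ...); when the MAC never appears before the creation timeout, it surfaces the "IP address never found in dhcp leases file" error seen above, deletes the half-created VM, and retries once.
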
	I0816 10:56:35.198641    7611 start.go:360] acquireMachinesLock for force-systemd-env-773000: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:57:28.021063    7611 start.go:364] duration metric: took 52.823999104s to acquireMachinesLock for "force-systemd-env-773000"
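
The 52.8s spent in acquireMachinesLock here is contention rather than a hang: per the lock parameters logged at 10:56:35 (Delay:500ms Timeout:13m0s), host creation is serialized behind a lock that is re-tried every Delay until Timeout elapses, and a concurrent test held it. A compact sketch of those semantics, using a hypothetical non-blocking try() rather than the actual lock implementation:

	// acquireWithRetry is a hypothetical sketch of the Delay/Timeout behavior
	// logged above: poll a non-blocking try() every delay until timeout.
	package main

	import (
		"errors"
		"time"
	)

	func acquireWithRetry(try func() bool, delay, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for !try() {
			if time.Now().After(deadline) {
				return errors.New("timed out acquiring machines lock")
			}
			time.Sleep(delay) // Delay:500ms in the log above
		}
		return nil
	}

	func main() {
		free := time.Now().Add(2 * time.Second) // pretend a peer frees the lock in 2s
		try := func() bool { return time.Now().After(free) }
		if err := acquireWithRetry(try, 500*time.Millisecond, 13*time.Minute); err != nil {
			panic(err)
		}
	}
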
	I0816 10:57:28.021097    7611 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-773000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-773000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:57:28.021152    7611 start.go:125] createHost starting for "" (driver="hyperkit")
	I0816 10:57:28.063310    7611 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0816 10:57:28.063380    7611 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:57:28.063404    7611 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:57:28.072626    7611 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53862
	I0816 10:57:28.073221    7611 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:57:28.073707    7611 main.go:141] libmachine: Using API Version  1
	I0816 10:57:28.073731    7611 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:57:28.074038    7611 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:57:28.074250    7611 main.go:141] libmachine: (force-systemd-env-773000) Calling .GetMachineName
	I0816 10:57:28.074347    7611 main.go:141] libmachine: (force-systemd-env-773000) Calling .DriverName
	I0816 10:57:28.074440    7611 start.go:159] libmachine.API.Create for "force-systemd-env-773000" (driver="hyperkit")
	I0816 10:57:28.074458    7611 client.go:168] LocalClient.Create starting
	I0816 10:57:28.074485    7611 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem
	I0816 10:57:28.074538    7611 main.go:141] libmachine: Decoding PEM data...
	I0816 10:57:28.074549    7611 main.go:141] libmachine: Parsing certificate...
	I0816 10:57:28.074594    7611 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem
	I0816 10:57:28.074632    7611 main.go:141] libmachine: Decoding PEM data...
	I0816 10:57:28.074642    7611 main.go:141] libmachine: Parsing certificate...
	I0816 10:57:28.074657    7611 main.go:141] libmachine: Running pre-create checks...
	I0816 10:57:28.074662    7611 main.go:141] libmachine: (force-systemd-env-773000) Calling .PreCreateCheck
	I0816 10:57:28.074792    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:57:28.074823    7611 main.go:141] libmachine: (force-systemd-env-773000) Calling .GetConfigRaw
	I0816 10:57:28.084498    7611 main.go:141] libmachine: Creating machine...
	I0816 10:57:28.084507    7611 main.go:141] libmachine: (force-systemd-env-773000) Calling .Create
	I0816 10:57:28.084603    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:57:28.084795    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | I0816 10:57:28.084599    7673 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:57:28.084847    7611 main.go:141] libmachine: (force-systemd-env-773000) Downloading /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1276/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 10:57:28.408713    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | I0816 10:57:28.408620    7673 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/id_rsa...
	I0816 10:57:28.578740    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | I0816 10:57:28.578652    7673 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/force-systemd-env-773000.rawdisk...
	I0816 10:57:28.578756    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Writing magic tar header
	I0816 10:57:28.578766    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Writing SSH key tar header
	I0816 10:57:28.579366    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | I0816 10:57:28.579324    7673 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000 ...
	I0816 10:57:28.953501    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:57:28.953523    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/hyperkit.pid
	I0816 10:57:28.953533    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Using UUID d3a27d15-7ee1-4f43-a045-eb53c427a8bf
	I0816 10:57:28.978947    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Generated MAC aa:c1:87:f2:b5:1b
	I0816 10:57:28.978963    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-773000
	I0816 10:57:28.978998    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:57:28 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"d3a27d15-7ee1-4f43-a045-eb53c427a8bf", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00011e330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:57:28.979026    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:57:28 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"d3a27d15-7ee1-4f43-a045-eb53c427a8bf", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00011e330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:57:28.979100    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:57:28 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "d3a27d15-7ee1-4f43-a045-eb53c427a8bf", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/force-systemd-env-773000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-773000"}
	I0816 10:57:28.979152    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:57:28 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U d3a27d15-7ee1-4f43-a045-eb53c427a8bf -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/force-systemd-env-773000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-773000"
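
For readers unfamiliar with hyperkit's flags, the invocation above decodes as follows. This annotated reconstruction uses a short placeholder for the long per-machine directory and follows hyperkit's documented usage text, so treat the comments as a gloss rather than driver output:

	// Annotated sketch of the hyperkit argv logged above. machineDir is a
	// placeholder for .minikube/machines/force-systemd-env-773000.
	package main

	import "fmt"

	func main() {
		const machineDir = "<machine-dir>" // placeholder, not a real path
		args := []string{
			"-A",                   // create ACPI tables for the guest
			"-u",                   // RTC keeps UTC time
			"-F", machineDir + "/hyperkit.pid", // where hyperkit writes its pid
			"-c", "2",              // 2 vCPUs
			"-m", "2048M",          // 2048 MiB of guest RAM
			"-s", "0:0,hostbridge", // PCI slot 0: host bridge
			"-s", "31,lpc",         // slot 31: LPC bus backing the com1 serial port
			"-s", "1:0,virtio-net", // slot 1: NIC; vmnet derives its MAC from -U
			"-U", "d3a27d15-7ee1-4f43-a045-eb53c427a8bf", // VM UUID
			"-s", "2:0,virtio-blk," + machineDir + "/force-systemd-env-773000.rawdisk", // boot disk
			"-s", "3,ahci-cd," + machineDir + "/boot2docker.iso", // install ISO
			"-s", "4,virtio-rnd",   // entropy device for the guest
			"-l", "com1,autopty=" + machineDir + "/tty,log=" + machineDir + "/console-ring", // serial console
			"-f", "kexec," + machineDir + "/bzimage," + machineDir + "/initrd,<kernel cmdline>", // direct kernel boot
		}
		fmt.Println(args)
	}

The -U/-s 1:0,virtio-net pairing is why the "Generated MAC" line appears before boot: the guest's MAC is a deterministic function of the VM UUID, and that MAC is what the lease scans above keep looking for.
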
	I0816 10:57:28.979169    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:57:28 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 10:57:28.982239    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:57:28 DEBUG: hyperkit: Pid is 7683
	I0816 10:57:28.982833    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 0
	I0816 10:57:28.982849    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:57:28.982935    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7683
	I0816 10:57:28.983901    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for aa:c1:87:f2:b5:1b in /var/db/dhcpd_leases ...
	I0816 10:57:28.983981    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:57:28.984006    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:57:28.984018    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:57:28.984069    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:57:28.984090    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:57:28.984099    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:57:28.984109    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:57:28.984117    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:57:28.984123    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:57:28.984219    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:57:28.984263    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:57:28.984277    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:57:28.984294    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:57:28.984310    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:57:28.984321    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:57:28.984340    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:57:28.984357    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:57:28.984372    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:57:28.990409    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:57:28 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 10:57:28.998580    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:57:28 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/force-systemd-env-773000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 10:57:28.999532    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:57:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:57:28.999545    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:57:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:57:28.999553    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:57:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:57:28.999562    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:57:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:57:29.372933    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:57:29 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 10:57:29.372949    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:57:29 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 10:57:29.487700    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:57:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:57:29.487716    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:57:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:57:29.487729    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:57:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:57:29.487754    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:57:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:57:29.488606    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:57:29 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 10:57:29.488621    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:57:29 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 10:57:30.984504    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 1
	I0816 10:57:30.984521    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:57:30.984631    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7683
	I0816 10:57:30.985415    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for aa:c1:87:f2:b5:1b in /var/db/dhcpd_leases ...
	I0816 10:57:30.985468    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:57:30.985479    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:57:30.985498    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:57:30.985505    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:57:30.985512    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:57:30.985519    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:57:30.985533    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:57:30.985543    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:57:30.985557    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:57:30.985570    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:57:30.985578    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:57:30.985586    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:57:30.985601    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:57:30.985615    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:57:30.985623    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:57:30.985645    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:57:30.985653    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:57:30.985662    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:57:32.986596    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 2
	I0816 10:57:32.986613    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:57:32.986626    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7683
	I0816 10:57:32.987616    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for aa:c1:87:f2:b5:1b in /var/db/dhcpd_leases ...
	I0816 10:57:32.987636    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:57:32.987643    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:57:32.987658    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:57:32.987673    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:57:32.987683    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:57:32.987693    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:57:32.987712    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:57:32.987719    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:57:32.987729    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:57:32.987737    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:57:32.987745    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:57:32.987755    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:57:32.987770    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:57:32.987783    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:57:32.987791    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:57:32.987798    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:57:32.987812    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:57:32.987824    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:57:34.853070    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:57:34 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0816 10:57:34.853253    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:57:34 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0816 10:57:34.853263    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:57:34 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0816 10:57:34.874478    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | 2024/08/16 10:57:34 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0816 10:57:34.988399    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 3
	I0816 10:57:34.988427    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:57:34.988607    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7683
	I0816 10:57:34.990053    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for aa:c1:87:f2:b5:1b in /var/db/dhcpd_leases ...
	I0816 10:57:34.990169    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:57:34.990192    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:57:34.990222    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:57:34.990242    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:57:34.990256    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:57:34.990279    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:57:34.990309    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:57:34.990332    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:57:34.990365    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:57:34.990399    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:57:34.990441    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:57:34.990453    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:57:34.990462    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:57:34.990470    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:57:34.990487    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:57:34.990499    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:57:34.990519    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:57:34.990541    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:57:36.992288    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 4
	I0816 10:57:36.992304    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:57:36.992409    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7683
	I0816 10:57:36.993194    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for aa:c1:87:f2:b5:1b in /var/db/dhcpd_leases ...
	I0816 10:57:36.993260    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:57:36.993273    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:57:36.993292    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:57:36.993302    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:57:36.993311    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:57:36.993320    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:57:36.993327    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:57:36.993336    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:57:36.993343    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:57:36.993351    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:57:36.993367    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:57:36.993380    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:57:36.993388    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:57:36.993395    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:57:36.993412    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:57:36.993421    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:57:36.993436    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:57:36.993448    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:57:38.994583    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 5
	I0816 10:57:38.994599    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:57:38.994734    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7683
	I0816 10:57:38.995518    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for aa:c1:87:f2:b5:1b in /var/db/dhcpd_leases ...
	I0816 10:57:38.995567    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:57:38.995576    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:57:38.995596    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:57:38.995606    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:57:38.995630    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:57:38.995654    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:57:38.995666    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:57:38.995675    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:57:38.995683    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:57:38.995696    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:57:38.995706    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:57:38.995714    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:57:38.995722    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:57:38.995729    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:57:38.995737    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:57:38.995744    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:57:38.995752    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:57:38.995760    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:57:40.996326    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 6
	I0816 10:57:40.996338    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:57:40.996479    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7683
	I0816 10:57:40.997249    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for aa:c1:87:f2:b5:1b in /var/db/dhcpd_leases ...
	I0816 10:57:40.997288    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:57:40.997297    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:57:40.997307    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:57:40.997314    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:57:40.997329    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:57:40.997338    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:57:40.997345    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:57:40.997351    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:57:40.997368    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:57:40.997379    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:57:40.997387    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:57:40.997396    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:57:40.997403    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:57:40.997409    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:57:40.997416    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:57:40.997423    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:57:40.997441    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:57:40.997457    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:57:42.998211    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 7
	I0816 10:57:42.998223    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:57:42.998291    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7683
	I0816 10:57:42.999057    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for aa:c1:87:f2:b5:1b in /var/db/dhcpd_leases ...
	I0816 10:57:42.999104    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:57:42.999117    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:57:42.999138    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:57:42.999148    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:57:42.999159    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:57:42.999167    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:57:42.999174    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:57:42.999180    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:57:42.999186    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:57:42.999193    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:57:42.999201    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:57:42.999208    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:57:42.999217    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:57:42.999226    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:57:42.999236    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:57:42.999250    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:57:42.999261    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:57:42.999271    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:57:45.001273    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 8
	I0816 10:57:45.001290    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:57:45.001331    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7683
	I0816 10:57:45.002174    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for aa:c1:87:f2:b5:1b in /var/db/dhcpd_leases ...
	I0816 10:57:45.002218    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:57:45.002228    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:57:45.002239    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:57:45.002245    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:57:45.002269    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:57:45.002281    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:57:45.002297    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:57:45.002309    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:57:45.002319    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:57:45.002327    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:57:45.002338    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:57:45.002345    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:57:45.002353    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:57:45.002362    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:57:45.002385    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:57:45.002402    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:57:45.002410    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:57:45.002416    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:57:47.002870    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 9
	I0816 10:57:47.002887    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:57:47.002956    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7683
	I0816 10:57:47.003782    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for aa:c1:87:f2:b5:1b in /var/db/dhcpd_leases ...
	I0816 10:57:47.003817    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:57:47.003833    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:57:47.003843    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:57:47.003855    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:57:47.003864    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:57:47.003873    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:57:47.003891    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:57:47.003905    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:57:47.003919    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:57:47.003930    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:57:47.003938    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:57:47.003945    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:57:47.003965    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:57:47.003977    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:57:47.003984    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:57:47.003993    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:57:47.004004    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:57:47.004013    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:57:49.004561    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 10
	I0816 10:57:49.004578    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:57:49.004636    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7683
	I0816 10:57:49.005477    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for aa:c1:87:f2:b5:1b in /var/db/dhcpd_leases ...
	I0816 10:57:49.005504    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:57:49.005512    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:57:49.005530    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:57:49.005540    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:57:49.005555    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:57:49.005567    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:57:49.005575    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:57:49.005581    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:57:49.005588    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:57:49.005596    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:57:49.005604    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:57:49.005615    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:57:49.005639    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:57:49.005652    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:57:49.005660    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:57:49.005680    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:57:49.005698    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:57:49.005711    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
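The loop above re-reads /var/db/dhcpd_leases every two seconds and compares each entry's HWAddress against the VM's MAC, aa:c1:87:f2:b5:1b, which never appears among the 17 leases, so every attempt falls through to the next retry. Below is a minimal Go sketch of that matching step, not minikube's actual code: it works from the entry format exactly as logged (the on-disk lease file is formatted differently), and the normalizeMAC helper is an assumption added because hyperkit prints unpadded octets such as de:40:9:4d:dc:28.

	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	// Two entry lines copied verbatim from the log above.
	var sampleEntries = []string{
		"{Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}",
		"{Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}",
	}

	// leaseRe captures the five fields of one logged dhcp entry.
	var leaseRe = regexp.MustCompile(`\{Name:(\S+) IPAddress:(\S+) HWAddress:(\S+) ID:(\S+) Lease:(\S+)\}`)

	// normalizeMAC zero-pads single-digit octets so "de:40:9:4d:dc:28"
	// compares equal to "de:40:09:4d:dc:28". (Hypothetical helper.)
	func normalizeMAC(mac string) string {
		parts := strings.Split(strings.ToLower(mac), ":")
		for i, p := range parts {
			if len(p) == 1 {
				parts[i] = "0" + p
			}
		}
		return strings.Join(parts, ":")
	}

	func main() {
		// The MAC the driver is waiting for is absent from the table,
		// so each 2-second attempt in the log ends without a match.
		want := normalizeMAC("aa:c1:87:f2:b5:1b")
		for _, line := range sampleEntries {
			m := leaseRe.FindStringSubmatch(line)
			if m == nil {
				continue
			}
			if normalizeMAC(m[3]) == want {
				fmt.Printf("lease found: %s -> %s\n", m[3], m[2])
				return
			}
		}
		fmt.Println("no lease for aa:c1:87:f2:b5:1b; driver sleeps 2s and retries")
	}
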
	I0816 10:57:51.005756    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 11
	I0816 10:57:51.005772    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:57:51.005839    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7683
	I0816 10:57:51.006633    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for aa:c1:87:f2:b5:1b in /var/db/dhcpd_leases ...
	I0816 10:57:51.006689    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:57:51.006700    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:57:51.006707    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:57:51.006714    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:57:51.006721    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:57:51.006727    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:57:51.006735    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:57:51.006744    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:57:51.006752    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:57:51.006759    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:57:51.006771    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:57:51.006781    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:57:51.006789    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:57:51.006798    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:57:51.006805    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:57:51.006813    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:57:51.006820    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:57:51.006827    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:57:53.006892    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 12
	I0816 10:57:53.006908    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:57:53.006977    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7683
	I0816 10:57:53.007769    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for aa:c1:87:f2:b5:1b in /var/db/dhcpd_leases ...
	I0816 10:57:53.007813    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:57:53.007829    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:57:53.007837    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:57:53.007847    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:57:53.007856    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:57:53.007865    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:57:53.007877    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:57:53.007885    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:57:53.007894    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:57:53.007901    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:57:53.007909    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:57:53.007931    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:57:53.007942    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:57:53.007956    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:57:53.007965    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:57:53.007972    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:57:53.007981    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:57:53.007991    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:57:55.009941    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 13
	I0816 10:57:55.009956    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:57:55.010024    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7683
	I0816 10:57:55.010851    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for aa:c1:87:f2:b5:1b in /var/db/dhcpd_leases ...
	I0816 10:57:55.010888    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:57:55.010900    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:57:55.010919    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:57:55.010925    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:57:55.010932    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:57:55.010939    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:57:55.010946    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:57:55.010953    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:57:55.010972    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:57:55.010984    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:57:55.010991    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:57:55.011000    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:57:55.011011    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:57:55.011021    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:57:55.011029    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:57:55.011037    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:57:55.011052    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:57:55.011063    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:57:57.013055    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 14
	I0816 10:57:57.013072    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:57:57.013122    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7683
	I0816 10:57:57.013902    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for aa:c1:87:f2:b5:1b in /var/db/dhcpd_leases ...
	I0816 10:57:57.013936    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:57:57.013944    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:57:57.013954    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:57:57.013962    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:57:57.013969    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:57:57.013978    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:57:57.014005    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:57:57.014019    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:57:57.014030    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:57:57.014048    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:57:57.014054    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:57:57.014061    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:57:57.014071    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:57:57.014077    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:57:57.014085    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:57:57.014092    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:57:57.014100    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:57:57.014119    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:57:59.014219    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 15
	I0816 10:57:59.014233    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:57:59.014339    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7683
	I0816 10:57:59.015082    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for aa:c1:87:f2:b5:1b in /var/db/dhcpd_leases ...
	I0816 10:57:59.015143    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:57:59.015156    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:57:59.015170    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:57:59.015182    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:57:59.015199    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:57:59.015211    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:57:59.015218    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:57:59.015226    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:57:59.015237    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:57:59.015246    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:57:59.015257    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:57:59.015265    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:57:59.015281    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:57:59.015292    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:57:59.015308    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:57:59.015318    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:57:59.015325    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:57:59.015338    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:58:01.015910    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 16
	I0816 10:58:01.015925    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:58:01.015984    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7683
	I0816 10:58:01.016841    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for aa:c1:87:f2:b5:1b in /var/db/dhcpd_leases ...
	I0816 10:58:01.016854    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:58:01.016873    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:58:01.016882    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:58:01.016889    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:58:01.016896    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:58:01.016902    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:58:01.016908    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:58:01.016915    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:58:01.016921    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:58:01.016928    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:58:01.016934    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:58:01.016941    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:58:01.016948    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:58:01.016955    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:58:01.016962    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:58:01.016978    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:58:01.016989    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:58:01.016999    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:58:03.017138    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 17
	I0816 10:58:03.017152    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:58:03.017251    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7683
	I0816 10:58:03.018069    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for aa:c1:87:f2:b5:1b in /var/db/dhcpd_leases ...
	I0816 10:58:03.018078    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:58:03.018088    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:58:03.018094    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:58:03.018102    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:58:03.018110    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:58:03.018121    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:58:03.018136    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:58:03.018144    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:58:03.018152    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:58:03.018162    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:58:03.018170    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:58:03.018177    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:58:03.018185    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:58:03.018193    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:58:03.018208    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:58:03.018222    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:58:03.018236    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:58:03.018246    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:58:05.019660    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 18
	I0816 10:58:05.019674    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:58:05.019704    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7683
	I0816 10:58:05.020574    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for aa:c1:87:f2:b5:1b in /var/db/dhcpd_leases ...
	I0816 10:58:05.020590    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:58:05.020616    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:58:05.020627    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:58:05.020637    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:58:05.020646    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:58:05.020656    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:58:05.020672    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:58:05.020684    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:58:05.020692    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:58:05.020701    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:58:05.020708    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:58:05.020716    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:58:05.020725    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:58:05.020733    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:58:05.020740    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:58:05.020748    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:58:05.020760    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:58:05.020770    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:58:07.022510    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 19
	I0816 10:58:07.022535    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:58:07.022575    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7683
	I0816 10:58:07.023456    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for aa:c1:87:f2:b5:1b in /var/db/dhcpd_leases ...
	I0816 10:58:07.023502    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:58:07.023517    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:58:07.023527    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:58:07.023534    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:58:07.023542    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:58:07.023550    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:58:07.023557    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:58:07.023565    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:58:07.023583    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:58:07.023596    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:58:07.023605    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:58:07.023613    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:58:07.023625    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:58:07.023633    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:58:07.023643    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:58:07.023651    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:58:07.023658    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:58:07.023666    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:58:09.025633    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 20
	I0816 10:58:09.025646    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:58:09.025696    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7683
	I0816 10:58:09.026511    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for aa:c1:87:f2:b5:1b in /var/db/dhcpd_leases ...
	I0816 10:58:09.026555    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:58:09.026572    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:58:09.026594    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:58:09.026606    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:58:09.026614    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:58:09.026625    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:58:09.026632    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:58:09.026642    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:58:09.026649    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:58:09.026656    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:58:09.026662    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:58:09.026668    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:58:09.026674    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:58:09.026682    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:58:09.026689    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:58:09.026697    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:58:09.026705    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:58:09.026711    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:58:11.028657    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 21
	I0816 10:58:11.028672    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:58:11.028737    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7683
	I0816 10:58:11.029526    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for aa:c1:87:f2:b5:1b in /var/db/dhcpd_leases ...
	I0816 10:58:11.029565    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:58:11.029574    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:58:11.029582    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:58:11.029597    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:58:11.029607    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:58:11.029632    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:58:11.029646    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:58:11.029654    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:58:11.029663    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:58:11.029670    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:58:11.029678    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:58:11.029692    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:58:11.029704    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:58:11.029716    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:58:11.029726    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:58:11.029734    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:58:11.029747    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:58:11.029763    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:58:13.029864    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 22
	I0816 10:58:13.029874    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:58:13.029944    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7683
	I0816 10:58:13.030726    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for aa:c1:87:f2:b5:1b in /var/db/dhcpd_leases ...
	I0816 10:58:13.030780    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:58:13.030791    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:58:13.030811    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:58:13.030829    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:58:13.030852    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:58:13.030863    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:58:13.030880    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:58:13.030893    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:58:13.030901    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:58:13.030910    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:58:13.030917    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:58:13.030925    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:58:13.030940    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:58:13.030953    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:58:13.030962    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:58:13.030970    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:58:13.030982    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:58:13.030995    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:58:15.032907    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 23
	I0816 10:58:15.032918    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:58:15.032967    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7683
	I0816 10:58:15.033775    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for aa:c1:87:f2:b5:1b in /var/db/dhcpd_leases ...
	I0816 10:58:15.033823    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:58:15.033835    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:58:15.033851    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:58:15.033859    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:58:15.033868    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:58:15.033877    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:58:15.033884    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:58:15.033891    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:58:15.033909    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:58:15.033921    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:58:15.033931    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:58:15.033939    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:58:15.033946    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:58:15.033954    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:58:15.033962    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:58:15.033970    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:58:15.033977    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:58:15.033986    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:58:17.034112    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 24
	I0816 10:58:17.034130    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:58:17.034198    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7683
	I0816 10:58:17.034971    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for aa:c1:87:f2:b5:1b in /var/db/dhcpd_leases ...
	I0816 10:58:17.035017    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:58:17.035027    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:58:17.035047    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:58:17.035066    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:58:17.035078    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:58:17.035090    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:58:17.035100    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:58:17.035108    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:58:17.035116    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:58:17.035130    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:58:17.035139    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:58:17.035146    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:58:17.035154    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:58:17.035169    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:58:17.035181    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:58:17.035197    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:58:17.035206    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:58:17.035222    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:58:19.037133    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 25
	I0816 10:58:19.037146    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:58:19.037283    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7683
	I0816 10:58:19.038079    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for aa:c1:87:f2:b5:1b in /var/db/dhcpd_leases ...
	I0816 10:58:19.038123    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:58:19.038137    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:58:19.038148    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:58:19.038165    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:58:19.038173    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:58:19.038180    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:58:19.038189    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:58:19.038195    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:58:19.038202    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:58:19.038210    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:58:19.038217    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:58:19.038225    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:58:19.038240    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:58:19.038250    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:58:19.038258    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:58:19.038265    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:58:19.038281    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:58:19.038295    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:58:21.038406    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 26
	I0816 10:58:21.038417    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:58:21.038475    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7683
	I0816 10:58:21.039241    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for aa:c1:87:f2:b5:1b in /var/db/dhcpd_leases ...
	I0816 10:58:21.039311    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:58:21.039321    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:58:21.039333    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:58:21.039341    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:58:21.039364    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:58:21.039378    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:58:21.039389    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:58:21.039401    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:58:21.039408    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:58:21.039417    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:58:21.039424    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:58:21.039432    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:58:21.039438    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:58:21.039445    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:58:21.039452    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:58:21.039460    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:58:21.039468    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:58:21.039477    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:58:23.041401    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 27
	I0816 10:58:23.041414    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:58:23.041472    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7683
	I0816 10:58:23.042253    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for aa:c1:87:f2:b5:1b in /var/db/dhcpd_leases ...
	I0816 10:58:23.042303    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:58:23.042313    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:58:23.042328    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:58:23.042338    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:58:23.042348    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:58:23.042360    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:58:23.042385    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:58:23.042394    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:58:23.042402    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:58:23.042410    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:58:23.042419    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:58:23.042426    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:58:23.042433    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:58:23.042439    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:58:23.042445    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:58:23.042452    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:58:23.042460    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:58:23.042468    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:58:25.042845    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 28
	I0816 10:58:25.042857    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:58:25.042924    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7683
	I0816 10:58:25.043718    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for aa:c1:87:f2:b5:1b in /var/db/dhcpd_leases ...
	I0816 10:58:25.043759    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:58:25.043770    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:58:25.043782    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:58:25.043788    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:58:25.043797    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:58:25.043805    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:58:25.043814    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:58:25.043827    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:58:25.043836    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:58:25.043850    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:58:25.043869    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:58:25.043881    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:58:25.043889    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:58:25.043900    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:58:25.043911    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:58:25.043920    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:58:25.043928    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:58:25.043936    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:58:27.045878    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Attempt 29
	I0816 10:58:27.045893    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:58:27.045969    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | hyperkit pid from json: 7683
	I0816 10:58:27.046779    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Searching for aa:c1:87:f2:b5:1b in /var/db/dhcpd_leases ...
	I0816 10:58:27.046831    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0816 10:58:27.046847    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:c0:fb:22:14:60 ID:1,1a:c0:fb:22:14:60 Lease:0x66c0e366}
	I0816 10:58:27.046864    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:1a:17:ff:79:df:48 ID:1,1a:17:ff:79:df:48 Lease:0x66c0e2a6}
	I0816 10:58:27.046873    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:6e:c9:f3:6a:2:88 ID:1,6e:c9:f3:6a:2:88 Lease:0x66c0e219}
	I0816 10:58:27.046881    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:19:f8:9f:de:df ID:1,26:19:f8:9f:de:df Lease:0x66bf900f}
	I0816 10:58:27.046888    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9e:66:f2:63:c1:16 ID:1,9e:66:f2:63:c1:16 Lease:0x66c0e1d7}
	I0816 10:58:27.046905    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:26:26:b7:e2:7f:92 ID:1,26:26:b7:e2:7f:92 Lease:0x66c0e1ab}
	I0816 10:58:27.046917    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:72:2e:7f:4e:66:cf ID:1,72:2e:7f:4e:66:cf Lease:0x66c0df53}
	I0816 10:58:27.046925    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:46:b7:7:2a:99:75 ID:1,46:b7:7:2a:99:75 Lease:0x66c0df2e}
	I0816 10:58:27.046933    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:26:16:ff:33:1b:5d ID:1,26:16:ff:33:1b:5d Lease:0x66c0ded1}
	I0816 10:58:27.046940    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:12:4f:d:a3:c4:d4 ID:1,12:4f:d:a3:c4:d4 Lease:0x66bf8d45}
	I0816 10:58:27.046946    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:58:27.046953    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66bf8d18}
	I0816 10:58:27.046961    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:58:27.046968    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:58:27.046976    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:58:27.046983    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:58:27.046991    7611 main.go:141] libmachine: (force-systemd-env-773000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:58:29.047319    7611 client.go:171] duration metric: took 1m0.974722392s to LocalClient.Create
	I0816 10:58:31.049365    7611 start.go:128] duration metric: took 1m3.030129308s to createHost
	I0816 10:58:31.049438    7611 start.go:83] releasing machines lock for "force-systemd-env-773000", held for 1m3.030243239s
	W0816 10:58:31.049508    7611 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p force-systemd-env-773000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for aa:c1:87:f2:b5:1b
	* Failed to start hyperkit VM. Running "minikube delete -p force-systemd-env-773000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for aa:c1:87:f2:b5:1b
	I0816 10:58:31.112723    7611 out.go:201] 
	W0816 10:58:31.133742    7611 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for aa:c1:87:f2:b5:1b
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for aa:c1:87:f2:b5:1b
	W0816 10:58:31.133760    7611 out.go:270] * 
	* 
	W0816 10:58:31.134535    7611 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:58:31.196789    7611 out.go:201] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-env-773000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-773000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-env-773000 ssh "docker info --format {{.CgroupDriver}}": exit status 50 (180.588466ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node force-systemd-env-773000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-env-773000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 50
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-08-16 10:58:31.488396 -0700 PDT m=+4254.144048852
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-773000 -n force-systemd-env-773000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-773000 -n force-systemd-env-773000: exit status 7 (77.587653ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 10:58:31.564081    7707 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0816 10:58:31.564102    7707 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-773000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "force-systemd-env-773000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-773000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-773000: (5.24746436s)
--- FAIL: TestForceSystemdEnv (234.61s)
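The repeated "Attempt N" blocks above are the hyperkit driver polling /var/db/dhcpd_leases on a two-second cadence for the MAC address it generated for the VM (aa:c1:87:f2:b5:1b); when no matching lease appears before the retry budget runs out, machine creation fails with "IP address never found in dhcp leases file", which then cascades into the GUEST_PROVISION and DRV_CP_ENDPOINT errors seen here. The Go sketch below is a minimal illustration of that lease-scan loop, not minikube's actual implementation; the entry layout (name/ip_address/hw_address/lease fields between braces) and the two-second, 30-attempt budget are assumptions inferred from the dhcp entry lines and timestamps logged above.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
	"time"
)

// findIPForMAC makes one pass over a macOS dhcpd_leases file and returns the
// ip_address of the entry whose hw_address carries the given MAC, or "" if
// no entry matches. Entries correspond to the "dhcp entry" lines in the log:
// name, ip_address, hw_address (as "<type>,<mac>"), identifier, lease.
func findIPForMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "{":
			ip = "" // start of a new lease entry
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// hw_address is "<type>,<mac>"; note macOS writes hex octets
			// without leading zeros (e.g. 46:b7:7:2a:99:75 above), so real
			// code would normalize both sides before comparing.
			hw := strings.TrimPrefix(line, "hw_address=")
			if parts := strings.SplitN(hw, ",", 2); len(parts) == 2 && parts[1] == mac {
				return ip, nil
			}
		}
	}
	return "", sc.Err()
}

func main() {
	const mac = "aa:c1:87:f2:b5:1b" // the MAC this run was waiting for
	for attempt := 0; attempt < 30; attempt++ {
		ip, err := findIPForMAC("/var/db/dhcpd_leases", mac)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if ip != "" {
			fmt.Println("found", ip)
			return
		}
		time.Sleep(2 * time.Second) // matches the cadence between attempts in the log
	}
	fmt.Println("IP address never found in dhcp leases file")
}

Against the snapshot captured in this log, the lease table holds 17 stale "minikube" entries and never gains the new MAC, so every pass returns empty and the loop exhausts its budget, consistent with LocalClient.Create giving up after roughly a minute at 10:58:29.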

                                                
                                    
TestMultiControlPlane/serial/StartCluster (194.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-286000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit 
E0816 10:02:00.482226    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/addons-725000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ha-286000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit : exit status 90 (3m11.529520343s)

                                                
                                                
-- stdout --
	* [ha-286000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "ha-286000" primary control-plane node in "ha-286000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: default-storageclass, storage-provisioner
	
	* Starting "ha-286000-m02" control-plane node in "ha-286000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Found network options:
	  - NO_PROXY=192.169.0.5
	* Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	  - env NO_PROXY=192.169.0.5
	* Verifying Kubernetes components...
	
	* Starting "ha-286000-m03" control-plane node in "ha-286000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Found network options:
	  - NO_PROXY=192.169.0.5,192.169.0.6
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 10:01:48.113296    3758 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:01:48.113462    3758 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:01:48.113467    3758 out.go:358] Setting ErrFile to fd 2...
	I0816 10:01:48.113470    3758 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:01:48.113648    3758 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1276/.minikube/bin
	I0816 10:01:48.115192    3758 out.go:352] Setting JSON to false
	I0816 10:01:48.140204    3758 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1878,"bootTime":1723825830,"procs":433,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0816 10:01:48.140316    3758 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:01:48.194498    3758 out.go:177] * [ha-286000] minikube v1.33.1 on Darwin 14.6.1
	I0816 10:01:48.236650    3758 notify.go:220] Checking for updates...
	I0816 10:01:48.262360    3758 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:01:48.320415    3758 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:01:48.381368    3758 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0816 10:01:48.452505    3758 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:01:48.474398    3758 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:01:48.495459    3758 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:01:48.520120    3758 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:01:48.550379    3758 out.go:177] * Using the hyperkit driver based on user configuration
	I0816 10:01:48.592331    3758 start.go:297] selected driver: hyperkit
	I0816 10:01:48.592358    3758 start.go:901] validating driver "hyperkit" against <nil>
	I0816 10:01:48.592382    3758 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:01:48.596757    3758 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:01:48.596907    3758 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19461-1276/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0816 10:01:48.605545    3758 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0816 10:01:48.609748    3758 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:01:48.609772    3758 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0816 10:01:48.609805    3758 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 10:01:48.610046    3758 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:01:48.610094    3758 cni.go:84] Creating CNI manager for ""
	I0816 10:01:48.610103    3758 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0816 10:01:48.610108    3758 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0816 10:01:48.610236    3758 start.go:340] cluster config:
	{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:01:48.610323    3758 iso.go:125] acquiring lock: {Name:mkb430b43b5e7a8a683454482519837ed996c78d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:01:48.632334    3758 out.go:177] * Starting "ha-286000" primary control-plane node in "ha-286000" cluster
	I0816 10:01:48.653456    3758 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:01:48.653537    3758 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0816 10:01:48.653563    3758 cache.go:56] Caching tarball of preloaded images
	I0816 10:01:48.653777    3758 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 10:01:48.653813    3758 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:01:48.654332    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:01:48.654370    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json: {Name:mk79fdae2e45cdfc987caaf16c5f211150e90dc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:01:48.654995    3758 start.go:360] acquireMachinesLock for ha-286000: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:01:48.655106    3758 start.go:364] duration metric: took 87.458µs to acquireMachinesLock for "ha-286000"
	I0816 10:01:48.655154    3758 start.go:93] Provisioning new machine with config: &{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:01:48.655246    3758 start.go:125] createHost starting for "" (driver="hyperkit")
	I0816 10:01:48.677450    3758 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 10:01:48.677701    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:01:48.677786    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:01:48.687593    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51004
	I0816 10:01:48.687935    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:01:48.688347    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:01:48.688357    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:01:48.688562    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:01:48.688668    3758 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:01:48.688765    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:01:48.688895    3758 start.go:159] libmachine.API.Create for "ha-286000" (driver="hyperkit")
	I0816 10:01:48.688916    3758 client.go:168] LocalClient.Create starting
	I0816 10:01:48.688948    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem
	I0816 10:01:48.688999    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:01:48.689012    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:01:48.689063    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem
	I0816 10:01:48.689100    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:01:48.689111    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:01:48.689128    3758 main.go:141] libmachine: Running pre-create checks...
	I0816 10:01:48.689135    3758 main.go:141] libmachine: (ha-286000) Calling .PreCreateCheck
	I0816 10:01:48.689224    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:48.689427    3758 main.go:141] libmachine: (ha-286000) Calling .GetConfigRaw
	I0816 10:01:48.698771    3758 main.go:141] libmachine: Creating machine...
	I0816 10:01:48.698794    3758 main.go:141] libmachine: (ha-286000) Calling .Create
	I0816 10:01:48.699018    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:48.699296    3758 main.go:141] libmachine: (ha-286000) DBG | I0816 10:01:48.699015    3766 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:01:48.699450    3758 main.go:141] libmachine: (ha-286000) Downloading /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1276/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 10:01:48.884950    3758 main.go:141] libmachine: (ha-286000) DBG | I0816 10:01:48.884833    3766 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa...
	I0816 10:01:48.986348    3758 main.go:141] libmachine: (ha-286000) DBG | I0816 10:01:48.986275    3766 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/ha-286000.rawdisk...
	I0816 10:01:48.986388    3758 main.go:141] libmachine: (ha-286000) DBG | Writing magic tar header
	I0816 10:01:48.986396    3758 main.go:141] libmachine: (ha-286000) DBG | Writing SSH key tar header
	I0816 10:01:48.987038    3758 main.go:141] libmachine: (ha-286000) DBG | I0816 10:01:48.987004    3766 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000 ...
	I0816 10:01:49.360700    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:49.360726    3758 main.go:141] libmachine: (ha-286000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/hyperkit.pid
	I0816 10:01:49.360741    3758 main.go:141] libmachine: (ha-286000) DBG | Using UUID ad96de67-e238-408c-89eb-d74e5b68d297
	I0816 10:01:49.471852    3758 main.go:141] libmachine: (ha-286000) DBG | Generated MAC 66:c8:48:4e:12:1b
	I0816 10:01:49.471873    3758 main.go:141] libmachine: (ha-286000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000
	I0816 10:01:49.471908    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ad96de67-e238-408c-89eb-d74e5b68d297", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000b2330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:01:49.471951    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ad96de67-e238-408c-89eb-d74e5b68d297", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000b2330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:01:49.471996    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "ad96de67-e238-408c-89eb-d74e5b68d297", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/ha-286000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"}
	I0816 10:01:49.472043    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U ad96de67-e238-408c-89eb-d74e5b68d297 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/ha-286000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"
	I0816 10:01:49.472063    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 10:01:49.475179    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: Pid is 3771
	I0816 10:01:49.475583    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 0
	I0816 10:01:49.475597    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:49.475674    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:49.476510    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:49.476583    3758 main.go:141] libmachine: (ha-286000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0816 10:01:49.476592    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:01:49.476602    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:01:49.476612    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:01:49.482826    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 10:01:49.534430    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 10:01:49.535040    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:01:49.535056    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:01:49.535064    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:01:49.535072    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:01:49.909510    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 10:01:49.909527    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 10:01:50.024439    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:01:50.024459    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:01:50.024479    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:01:50.024494    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:01:50.025363    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 10:01:50.025375    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 10:01:51.477573    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 1
	I0816 10:01:51.477587    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:51.477660    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:51.478481    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:51.478504    3758 main.go:141] libmachine: (ha-286000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0816 10:01:51.478511    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:01:51.478520    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:01:51.478531    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:01:53.479582    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 2
	I0816 10:01:53.479599    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:53.479760    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:53.480630    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:53.480678    3758 main.go:141] libmachine: (ha-286000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0816 10:01:53.480688    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:01:53.480697    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:01:53.480710    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:01:55.482614    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 3
	I0816 10:01:55.482630    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:55.482703    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:55.483616    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:55.483662    3758 main.go:141] libmachine: (ha-286000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0816 10:01:55.483672    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:01:55.483679    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:01:55.483687    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:01:55.581983    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:55 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 10:01:55.582063    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:55 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 10:01:55.582075    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:55 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 10:01:55.606414    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:55 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 10:01:57.484306    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 4
	I0816 10:01:57.484322    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:57.484414    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:57.485196    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:57.485261    3758 main.go:141] libmachine: (ha-286000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0816 10:01:57.485273    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:01:57.485286    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:01:57.485294    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:01:59.485978    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 5
	I0816 10:01:59.486008    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:59.486192    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:59.487610    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:59.487709    3758 main.go:141] libmachine: (ha-286000) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:01:59.487730    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:01:59.487784    3758 main.go:141] libmachine: (ha-286000) DBG | Found match: 66:c8:48:4e:12:1b
	I0816 10:01:59.487797    3758 main.go:141] libmachine: (ha-286000) DBG | IP: 192.169.0.5
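
The retry loop above shows how the hyperkit driver discovers the VM's IP: it polls the macOS DHCP lease database until an entry matches the MAC it generated (66:c8:48:4e:12:1b, matched on attempt 5). A minimal sketch of that lookup, assuming the standard bootpd lease-file layout of name=/ip_address=/hw_address= blocks (the line-by-line parse is a simplification of the driver's real parser):

// leases_sketch.go - sketch of the "Searching for ... in /var/db/dhcpd_leases"
// polling step. Assumes ip_address precedes hw_address within each block,
// as in the standard macOS bootpd lease format.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// ipForMAC returns the ip_address of the lease block whose hw_address ends
// with the target MAC (bootpd prefixes a "1," hardware-type byte).
func ipForMAC(leaseFile, mac string) (string, error) {
	f, err := os.Open(leaseFile)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			if strings.HasSuffix(line, mac) {
				return ip, nil
			}
		}
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}

func main() {
	ip, err := ipForMAC("/var/db/dhcpd_leases", "66:c8:48:4e:12:1b")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("IP:", ip)
}

The caller re-runs this every couple of seconds (the "Attempt N" lines), since the guest only appears in the lease file once its DHCP client has run.
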
	I0816 10:01:59.487831    3758 main.go:141] libmachine: (ha-286000) Calling .GetConfigRaw
	I0816 10:01:59.488860    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:01:59.489069    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:01:59.489276    3758 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 10:01:59.489289    3758 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:01:59.489463    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:59.489567    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:59.490615    3758 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 10:01:59.490628    3758 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 10:01:59.490644    3758 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 10:01:59.490652    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:01:59.490782    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:01:59.490923    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:01:59.491055    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:01:59.491190    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:01:59.491362    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:01:59.491595    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:01:59.491605    3758 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 10:01:59.511220    3758 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0816 10:02:02.573095    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
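
The exchange above is the WaitForSSH probe: dial the guest, run `exit 0`, and retry on failure. The first attempt at 10:01:59 fails the handshake (sshd not yet serving the injected key) and the retry at 10:02:02 succeeds. A sketch of that loop with golang.org/x/crypto/ssh; the host, user, key path, and retry cadence are illustrative assumptions:

// waitssh_sketch.go - sketch of libmachine's "Waiting for SSH" loop.
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func waitForSSH(addr, user, keyPath string, attempts int) error {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fresh test VM, no pinned host key
		Timeout:         5 * time.Second,
	}
	for i := 0; i < attempts; i++ {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			sess, serr := client.NewSession()
			if serr == nil {
				rerr := sess.Run("exit 0") // same probe command as the log
				sess.Close()
				client.Close()
				if rerr == nil {
					return nil
				}
			} else {
				client.Close()
			}
		}
		time.Sleep(3 * time.Second) // guest sshd may still be starting
	}
	return fmt.Errorf("ssh never became available on %s", addr)
}

func main() {
	if err := waitForSSH("192.169.0.5:22", "docker", "id_rsa", 10); err != nil {
		fmt.Println(err)
	}
}
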
	I0816 10:02:02.573108    3758 main.go:141] libmachine: Detecting the provisioner...
	I0816 10:02:02.573113    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:02.573255    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:02.573385    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.573489    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.573602    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:02.573753    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:02.573906    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:02.573914    3758 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 10:02:02.632510    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 10:02:02.632561    3758 main.go:141] libmachine: found compatible host: buildroot
	I0816 10:02:02.632568    3758 main.go:141] libmachine: Provisioning with buildroot...
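
Provisioner detection above is just `cat /etc/os-release` over SSH followed by matching the ID field ("buildroot" in this run). A minimal local sketch of that parse; reading a local file stands in for the SSH round trip:

// provisioner_sketch.go - sketch of the os-release ID match used to pick
// the buildroot provisioner.
package main

import (
	"fmt"
	"os"
	"strings"
)

func osReleaseID(data string) string {
	for _, line := range strings.Split(data, "\n") {
		if v, ok := strings.CutPrefix(line, "ID="); ok {
			return strings.Trim(v, `"`)
		}
	}
	return ""
}

func main() {
	data, err := os.ReadFile("/etc/os-release")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("detected provisioner:", osReleaseID(string(data)))
}
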
	I0816 10:02:02.632573    3758 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:02:02.632721    3758 buildroot.go:166] provisioning hostname "ha-286000"
	I0816 10:02:02.632732    3758 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:02:02.632834    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:02.632956    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:02.633046    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.633122    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.633236    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:02.633363    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:02.633492    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:02.633500    3758 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-286000 && echo "ha-286000" | sudo tee /etc/hostname
	I0816 10:02:02.702336    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-286000
	
	I0816 10:02:02.702354    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:02.702482    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:02.702569    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.702670    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.702756    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:02.702893    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:02.703040    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:02.703052    3758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-286000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-286000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-286000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 10:02:02.766997    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 10:02:02.767016    3758 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19461-1276/.minikube CaCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19461-1276/.minikube}
	I0816 10:02:02.767026    3758 buildroot.go:174] setting up certificates
	I0816 10:02:02.767038    3758 provision.go:84] configureAuth start
	I0816 10:02:02.767044    3758 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:02:02.767175    3758 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:02:02.767270    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:02.767372    3758 provision.go:143] copyHostCerts
	I0816 10:02:02.767406    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:02:02.767476    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem, removing ...
	I0816 10:02:02.767485    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:02:02.767630    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem (1679 bytes)
	I0816 10:02:02.767837    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:02:02.767868    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem, removing ...
	I0816 10:02:02.767873    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:02:02.767991    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem (1078 bytes)
	I0816 10:02:02.768146    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:02:02.768179    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem, removing ...
	I0816 10:02:02.768184    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:02:02.768268    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem (1123 bytes)
	I0816 10:02:02.768410    3758 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem org=jenkins.ha-286000 san=[127.0.0.1 192.169.0.5 ha-286000 localhost minikube]
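
The line above generates the Docker server certificate, signed by the minikube CA, with SANs covering every name the daemon may be reached by: 127.0.0.1, 192.169.0.5, ha-286000, localhost, minikube. A sketch of that signing step with crypto/x509; the CA here is generated inline rather than loaded from .minikube/certs/ca.pem and ca-key.pem, and validity periods are illustrative:

// servercert_sketch.go - sketch of "generating server cert" with the SANs
// listed in the log line above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Illustrative CA, stand-in for the persisted minikubeCA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-286000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The san=[...] list from the log, split into the two x509 fields.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.5")},
		DNSNames:    []string{"ha-286000", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
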
	I0816 10:02:02.915225    3758 provision.go:177] copyRemoteCerts
	I0816 10:02:02.915279    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 10:02:02.915299    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:02.915444    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:02.915558    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.915640    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:02.915723    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:02.953069    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 10:02:02.953145    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 10:02:02.973516    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 10:02:02.973600    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0816 10:02:02.993038    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 10:02:02.993100    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 10:02:03.013565    3758 provision.go:87] duration metric: took 246.514857ms to configureAuth
	I0816 10:02:03.013581    3758 buildroot.go:189] setting minikube options for container-runtime
	I0816 10:02:03.013724    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:02:03.013737    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:03.013889    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:03.013978    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:03.014067    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.014147    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.014250    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:03.014369    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:03.014498    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:03.014506    3758 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 10:02:03.073645    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 10:02:03.073660    3758 buildroot.go:70] root file system type: tmpfs
	I0816 10:02:03.073734    3758 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 10:02:03.073745    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:03.073872    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:03.073966    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.074063    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.074164    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:03.074292    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:03.074429    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:03.074479    3758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 10:02:03.148891    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 10:02:03.148912    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:03.149052    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:03.149138    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.149235    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.149324    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:03.149466    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:03.149604    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:03.149616    3758 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 10:02:04.780498    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
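The docker.service contents above are rendered host-side and installed idempotently: write the unit to docker.service.new, `diff -u` it against the installed unit, and only mv/daemon-reload/enable/restart when it differs (here diff fails because no unit existed yet, so the new one is installed). A sketch of the rendering half with text/template, trimmed to the option-bearing lines; the field names are assumptions of this sketch, not minikube's exact template data:

// dockerunit_sketch.go - sketch of assembling the ExecStart line seen in
// the unit above from driver label and engine options.
package main

import (
	"os"
	"text/template"
)

const unit = `[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock {{range .EngineOptions}}{{.}} {{end}}--label provider={{.Provider}}
`

type unitData struct {
	Provider      string
	EngineOptions []string
}

func main() {
	t := template.Must(template.New("docker.service").Parse(unit))
	data := unitData{
		Provider: "hyperkit",
		EngineOptions: []string{
			"--default-ulimit=nofile=1048576:1048576",
			"--tlsverify",
			"--insecure-registry 10.96.0.0/12",
		},
	}
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
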
	I0816 10:02:04.780519    3758 main.go:141] libmachine: Checking connection to Docker...
	I0816 10:02:04.780526    3758 main.go:141] libmachine: (ha-286000) Calling .GetURL
	I0816 10:02:04.780691    3758 main.go:141] libmachine: Docker is up and running!
	I0816 10:02:04.780698    3758 main.go:141] libmachine: Reticulating splines...
	I0816 10:02:04.780702    3758 client.go:171] duration metric: took 16.091876944s to LocalClient.Create
	I0816 10:02:04.780713    3758 start.go:167] duration metric: took 16.091916939s to libmachine.API.Create "ha-286000"
	I0816 10:02:04.780722    3758 start.go:293] postStartSetup for "ha-286000" (driver="hyperkit")
	I0816 10:02:04.780729    3758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 10:02:04.780738    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:04.780873    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 10:02:04.780884    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:04.780956    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:04.781047    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:04.781154    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:04.781275    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:04.819650    3758 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 10:02:04.823314    3758 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 10:02:04.823335    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/addons for local assets ...
	I0816 10:02:04.823443    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/files for local assets ...
	I0816 10:02:04.823624    3758 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> 18312.pem in /etc/ssl/certs
	I0816 10:02:04.823631    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /etc/ssl/certs/18312.pem
	I0816 10:02:04.823837    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 10:02:04.831860    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:02:04.850901    3758 start.go:296] duration metric: took 70.172878ms for postStartSetup
	I0816 10:02:04.850935    3758 main.go:141] libmachine: (ha-286000) Calling .GetConfigRaw
	I0816 10:02:04.851561    3758 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:02:04.851702    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:02:04.852053    3758 start.go:128] duration metric: took 16.196888252s to createHost
	I0816 10:02:04.852071    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:04.852167    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:04.852261    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:04.852344    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:04.852420    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:04.852525    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:04.852648    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:04.852655    3758 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 10:02:04.911299    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723827724.986438448
	
	I0816 10:02:04.911317    3758 fix.go:216] guest clock: 1723827724.986438448
	I0816 10:02:04.911322    3758 fix.go:229] Guest: 2024-08-16 10:02:04.986438448 -0700 PDT Remote: 2024-08-16 10:02:04.852061 -0700 PDT m=+16.776125584 (delta=134.377448ms)
	I0816 10:02:04.911341    3758 fix.go:200] guest clock delta is within tolerance: 134.377448ms
	I0816 10:02:04.911345    3758 start.go:83] releasing machines lock for "ha-286000", held for 16.256325696s
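
The fix.go lines above check clock skew: run `date +%s.%N` in the guest, subtract the host timestamp, and only resync if the delta exceeds a tolerance (134ms passes here). A sketch of that comparison; the 2s threshold is an assumption, since the log only shows that 134ms was "within tolerance":

// clockdelta_sketch.go - sketch of the guest-clock tolerance check.
package main

import (
	"fmt"
	"strconv"
	"time"
)

func withinTolerance(guestOut string, host time.Time, tol time.Duration) (time.Duration, bool, error) {
	// float64 parsing loses nanosecond precision; fine for a skew check.
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, false, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tol, nil
}

func main() {
	// Values taken from the log: guest 1723827724.986438448, host ...724.852061.
	delta, ok, err := withinTolerance("1723827724.986438448",
		time.Unix(1723827724, 852061000), 2*time.Second)
	if err != nil {
		panic(err)
	}
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}
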
	I0816 10:02:04.911364    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:04.911498    3758 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:02:04.911592    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:04.911942    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:04.912068    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:04.912144    3758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 10:02:04.912171    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:04.912218    3758 ssh_runner.go:195] Run: cat /version.json
	I0816 10:02:04.912231    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:04.912262    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:04.912305    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:04.912360    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:04.912384    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:04.912437    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:04.912464    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:04.912547    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:04.912567    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:04.945688    3758 ssh_runner.go:195] Run: systemctl --version
	I0816 10:02:04.997158    3758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 10:02:05.002278    3758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 10:02:05.002315    3758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 10:02:05.015089    3758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 10:02:05.015098    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:02:05.015195    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:02:05.030338    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 10:02:05.038588    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 10:02:05.046898    3758 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 10:02:05.046932    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 10:02:05.055211    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:02:05.063533    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 10:02:05.073244    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:02:05.081895    3758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 10:02:05.090915    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 10:02:05.099773    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 10:02:05.108719    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 10:02:05.117772    3758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 10:02:05.125574    3758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 10:02:05.145403    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:05.261568    3758 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0816 10:02:05.278920    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:02:05.278996    3758 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 10:02:05.293925    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:02:05.306704    3758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 10:02:05.320000    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:02:05.330230    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:02:05.340255    3758 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 10:02:05.369098    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:02:05.379374    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:02:05.395120    3758 ssh_runner.go:195] Run: which cri-dockerd
	I0816 10:02:05.397921    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 10:02:05.404866    3758 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 10:02:05.417935    3758 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 10:02:05.514793    3758 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 10:02:05.626208    3758 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 10:02:05.626292    3758 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 10:02:05.640317    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:05.737978    3758 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:02:08.105430    3758 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.367458496s)
	I0816 10:02:08.105491    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0816 10:02:08.115970    3758 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0816 10:02:08.129325    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:02:08.140312    3758 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0816 10:02:08.237628    3758 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0816 10:02:08.340285    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:08.436168    3758 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0816 10:02:08.457808    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:02:08.468314    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:08.561830    3758 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0816 10:02:08.620080    3758 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0816 10:02:08.620164    3758 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0816 10:02:08.624716    3758 start.go:563] Will wait 60s for crictl version
	I0816 10:02:08.624764    3758 ssh_runner.go:195] Run: which crictl
	I0816 10:02:08.627806    3758 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 10:02:08.660437    3758 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0816 10:02:08.660505    3758 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:02:08.678073    3758 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:02:08.722165    3758 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0816 10:02:08.722217    3758 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:02:08.722605    3758 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0816 10:02:08.727140    3758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:02:08.736769    3758 kubeadm.go:883] updating cluster {Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 10:02:08.736830    3758 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:02:08.736887    3758 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 10:02:08.748223    3758 docker.go:685] Got preloaded images: 
	I0816 10:02:08.748236    3758 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.0 wasn't preloaded
	I0816 10:02:08.748294    3758 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0816 10:02:08.755601    3758 ssh_runner.go:195] Run: which lz4
	I0816 10:02:08.758510    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0816 10:02:08.758630    3758 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 10:02:08.761594    3758 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 10:02:08.761610    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342554258 bytes)
	I0816 10:02:09.771496    3758 docker.go:649] duration metric: took 1.012922149s to copy over tarball
	I0816 10:02:09.771559    3758 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 10:02:12.050040    3758 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.27849665s)
	I0816 10:02:12.050054    3758 ssh_runner.go:146] rm: /preloaded.tar.lz4
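
The preload step above copies the ~342MB preloaded-images tarball to /preloaded.tar.lz4 on the guest, unpacks it into /var (Docker's image store) with an lz4 filter, then removes it. A sketch running the same tar invocation via os/exec; in the real flow it goes through the ssh runner rather than locally:

// preload_sketch.go - sketch of the preload extraction command logged above.
package main

import (
	"log"
	"os/exec"
)

func main() {
	// Mirrors: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	cmd := exec.Command("tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", // decompress through the lz4 binary
		"-C", "/var",
		"-xf", "/preloaded.tar.lz4")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
}

Preserving xattrs matters here: the extracted images include binaries that rely on security.capability (e.g. for binding low ports without root).
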
	I0816 10:02:12.075438    3758 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0816 10:02:12.083331    3758 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0816 10:02:12.097261    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:12.204348    3758 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:02:14.516272    3758 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.311936705s)
	I0816 10:02:14.516363    3758 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 10:02:14.529500    3758 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0816 10:02:14.529519    3758 cache_images.go:84] Images are preloaded, skipping loading
	I0816 10:02:14.529530    3758 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.0 docker true true} ...
	I0816 10:02:14.529607    3758 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-286000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 10:02:14.529674    3758 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0816 10:02:14.568093    3758 cni.go:84] Creating CNI manager for ""
	I0816 10:02:14.568107    3758 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0816 10:02:14.568120    3758 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 10:02:14.568134    3758 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-286000 NodeName:ha-286000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 10:02:14.568222    3758 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-286000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
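The generated config above is three YAML documents in one file: kubeadm's InitConfiguration, ClusterConfiguration, and the KubeletConfiguration/KubeProxyConfiguration overrides. minikube renders them from a text template; the same KubeletConfiguration document can be sketched programmatically by modeling just the fields visible above and marshaling with a YAML library (gopkg.in/yaml.v3 here is an assumption of this sketch, not minikube's approach):

// kubeletcfg_sketch.go - sketch producing the KubeletConfiguration document
// shown in the kubeadm config above.
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

type kubeletConfig struct {
	APIVersion               string            `yaml:"apiVersion"`
	Kind                     string            `yaml:"kind"`
	CgroupDriver             string            `yaml:"cgroupDriver"`
	ContainerRuntimeEndpoint string            `yaml:"containerRuntimeEndpoint"`
	HairpinMode              string            `yaml:"hairpinMode"`
	RuntimeRequestTimeout    string            `yaml:"runtimeRequestTimeout"`
	ClusterDomain            string            `yaml:"clusterDomain"`
	FailSwapOn               bool              `yaml:"failSwapOn"`
	StaticPodPath            string            `yaml:"staticPodPath"`
	EvictionHard             map[string]string `yaml:"evictionHard"`
}

func main() {
	cfg := kubeletConfig{
		APIVersion:               "kubelet.config.k8s.io/v1beta1",
		Kind:                     "KubeletConfiguration",
		CgroupDriver:             "cgroupfs",
		ContainerRuntimeEndpoint: "unix:///var/run/cri-dockerd.sock",
		HairpinMode:              "hairpin-veth",
		RuntimeRequestTimeout:    "15m",
		ClusterDomain:            "cluster.local",
		FailSwapOn:               false,
		StaticPodPath:            "/etc/kubernetes/manifests",
		EvictionHard: map[string]string{
			"nodefs.available":  "0%",
			"nodefs.inodesFree": "0%",
			"imagefs.available": "0%",
		},
	}
	out, err := yaml.Marshal(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
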
	I0816 10:02:14.568237    3758 kube-vip.go:115] generating kube-vip config ...
	I0816 10:02:14.568286    3758 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 10:02:14.580706    3758 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 10:02:14.580778    3758 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
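	Note: with cp_enable, vip_leaderelection, and lb_enable all "true", kube-vip runs leader election among the control-plane nodes and the winner answers ARP for the VIP 192.169.0.254. A sketch of how you could inspect the election (assuming kubectl access to the running cluster) is to read the lease named in vip_leasename:
	
	    kubectl -n kube-system get lease plndr-cp-lock -o yaml
	
	The lease's spec.holderIdentity names the elected node, and the short leaseduration/renewdeadline/retryperiod values (5s/3s/1s) keep failover to another control plane fast.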
	I0816 10:02:14.580832    3758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 10:02:14.588124    3758 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 10:02:14.588174    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0816 10:02:14.595283    3758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0816 10:02:14.608721    3758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 10:02:14.622036    3758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0816 10:02:14.635525    3758 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1446 bytes)
	I0816 10:02:14.648868    3758 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0816 10:02:14.651748    3758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
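	Note: that one-liner is a strip-and-append rewrite of /etc/hosts; the same command, unrolled with comments for readability:
	
	    {
	      grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts   # drop any stale entry for the name
	      echo "192.169.0.254	control-plane.minikube.internal"     # map the name to the HA VIP
	    } > /tmp/h.$$                                                # stage in a temp file keyed by PID
	    sudo cp /tmp/h.$$ /etc/hosts                                 # install the rewritten hosts file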
	I0816 10:02:14.661305    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:14.752561    3758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:02:14.768967    3758 certs.go:68] Setting up /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000 for IP: 192.169.0.5
	I0816 10:02:14.768980    3758 certs.go:194] generating shared ca certs ...
	I0816 10:02:14.768991    3758 certs.go:226] acquiring lock for ca certs: {Name:mka549fbc467849834d2267da204028c49d88a3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:14.769183    3758 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key
	I0816 10:02:14.769258    3758 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key
	I0816 10:02:14.769269    3758 certs.go:256] generating profile certs ...
	I0816 10:02:14.769315    3758 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key
	I0816 10:02:14.769328    3758 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt with IP's: []
	I0816 10:02:14.849573    3758 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt ...
	I0816 10:02:14.849589    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt: {Name:mk9c6a2b5871e8d33f12e0a58268b028ea37ab4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:14.849911    3758 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key ...
	I0816 10:02:14.849919    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key: {Name:mk7d8ed264e583d7ced6b16b60c72c11b15a1f22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:14.850137    3758 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.af99fd6a
	I0816 10:02:14.850151    3758 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.af99fd6a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.254]
	I0816 10:02:14.968129    3758 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.af99fd6a ...
	I0816 10:02:14.968143    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.af99fd6a: {Name:mk3b8945a16852dca8274ec1ab3a9f2eade80831 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:14.968464    3758 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.af99fd6a ...
	I0816 10:02:14.968473    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.af99fd6a: {Name:mk90fcce3348a214c4733fe5a1fb806faff7773f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:14.968717    3758 certs.go:381] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.af99fd6a -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt
	I0816 10:02:14.968940    3758 certs.go:385] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.af99fd6a -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key
	I0816 10:02:14.969171    3758 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key
	I0816 10:02:14.969185    3758 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt with IP's: []
	I0816 10:02:15.029329    3758 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt ...
	I0816 10:02:15.029338    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt: {Name:mk70aaa3903fb58df2124d779dbea07aa95769f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:15.029604    3758 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key ...
	I0816 10:02:15.029611    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key: {Name:mka3a85354927e8da31f9dfc67f92dea3c77ca65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
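	Note: the apiserver certificate generated above is signed for the node IP 192.169.0.5, the HA VIP 192.169.0.254, and the in-cluster service IPs. A quick way to confirm the SANs on the file written here (a sketch using the same openssl binary the log invokes a few lines later) would be:
	
	    openssl x509 -in /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'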
	I0816 10:02:15.029850    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 10:02:15.029876    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 10:02:15.029898    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 10:02:15.029940    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 10:02:15.029958    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 10:02:15.029990    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 10:02:15.030008    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 10:02:15.030025    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 10:02:15.030118    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem (1338 bytes)
	W0816 10:02:15.030163    3758 certs.go:480] ignoring /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831_empty.pem, impossibly tiny 0 bytes
	I0816 10:02:15.030171    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 10:02:15.030240    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem (1078 bytes)
	I0816 10:02:15.030268    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem (1123 bytes)
	I0816 10:02:15.030354    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem (1679 bytes)
	I0816 10:02:15.030467    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:02:15.030499    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:15.030525    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem -> /usr/share/ca-certificates/1831.pem
	I0816 10:02:15.030542    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /usr/share/ca-certificates/18312.pem
	I0816 10:02:15.031030    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 10:02:15.051618    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 10:02:15.071318    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 10:02:15.091017    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 10:02:15.110497    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0816 10:02:15.131033    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 10:02:15.150747    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 10:02:15.170186    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 10:02:15.189808    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 10:02:15.209868    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem --> /usr/share/ca-certificates/1831.pem (1338 bytes)
	I0816 10:02:15.229378    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /usr/share/ca-certificates/18312.pem (1708 bytes)
	I0816 10:02:15.248667    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 10:02:15.262035    3758 ssh_runner.go:195] Run: openssl version
	I0816 10:02:15.266135    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 10:02:15.275959    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:15.279367    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:48 /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:15.279403    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:15.283611    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 10:02:15.291872    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1831.pem && ln -fs /usr/share/ca-certificates/1831.pem /etc/ssl/certs/1831.pem"
	I0816 10:02:15.299890    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1831.pem
	I0816 10:02:15.303179    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 16:57 /usr/share/ca-certificates/1831.pem
	I0816 10:02:15.303212    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1831.pem
	I0816 10:02:15.307423    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1831.pem /etc/ssl/certs/51391683.0"
	I0816 10:02:15.315741    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18312.pem && ln -fs /usr/share/ca-certificates/18312.pem /etc/ssl/certs/18312.pem"
	I0816 10:02:15.324088    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18312.pem
	I0816 10:02:15.327441    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 16:57 /usr/share/ca-certificates/18312.pem
	I0816 10:02:15.327479    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18312.pem
	I0816 10:02:15.331723    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18312.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 10:02:15.339921    3758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 10:02:15.343061    3758 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 10:02:15.343109    3758 kubeadm.go:392] StartCluster: {Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:02:15.343194    3758 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0816 10:02:15.356016    3758 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 10:02:15.363405    3758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 10:02:15.370635    3758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 10:02:15.377806    3758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 10:02:15.377819    3758 kubeadm.go:157] found existing configuration files:
	
	I0816 10:02:15.377855    3758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 10:02:15.384868    3758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 10:02:15.384903    3758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 10:02:15.392001    3758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 10:02:15.398935    3758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 10:02:15.398971    3758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 10:02:15.406181    3758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 10:02:15.413108    3758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 10:02:15.413142    3758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 10:02:15.422603    3758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 10:02:15.435692    3758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 10:02:15.435749    3758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 10:02:15.450920    3758 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 10:02:15.521417    3758 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 10:02:15.521501    3758 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 10:02:15.590843    3758 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 10:02:15.590923    3758 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 10:02:15.591008    3758 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 10:02:15.600874    3758 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 10:02:15.632377    3758 out.go:235]   - Generating certificates and keys ...
	I0816 10:02:15.632427    3758 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 10:02:15.632475    3758 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 10:02:15.791885    3758 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0816 10:02:15.852803    3758 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0816 10:02:15.961982    3758 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0816 10:02:16.146557    3758 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0816 10:02:16.493401    3758 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0816 10:02:16.493556    3758 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-286000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I0816 10:02:16.704832    3758 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0816 10:02:16.705108    3758 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-286000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I0816 10:02:16.922818    3758 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0816 10:02:17.082810    3758 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0816 10:02:17.393939    3758 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0816 10:02:17.394070    3758 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 10:02:17.465159    3758 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 10:02:17.731984    3758 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 10:02:18.019833    3758 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 10:02:18.164168    3758 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 10:02:18.273756    3758 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 10:02:18.274215    3758 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 10:02:18.275949    3758 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 10:02:18.300490    3758 out.go:235]   - Booting up control plane ...
	I0816 10:02:18.300569    3758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 10:02:18.300638    3758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 10:02:18.300693    3758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 10:02:18.300780    3758 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 10:02:18.300850    3758 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 10:02:18.300879    3758 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 10:02:18.405245    3758 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 10:02:18.405330    3758 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 10:02:18.909805    3758 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 504.760374ms
	I0816 10:02:18.909928    3758 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0816 10:02:24.844637    3758 kubeadm.go:310] [api-check] The API server is healthy after 5.938882642s
	I0816 10:02:24.854864    3758 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 10:02:24.862134    3758 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 10:02:24.890940    3758 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 10:02:24.891101    3758 kubeadm.go:310] [mark-control-plane] Marking the node ha-286000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 10:02:24.901441    3758 kubeadm.go:310] [bootstrap-token] Using token: 73merd.1elxantqs1p5mkiz
	I0816 10:02:24.939545    3758 out.go:235]   - Configuring RBAC rules ...
	I0816 10:02:24.939737    3758 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 10:02:24.944670    3758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 10:02:24.951393    3758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 10:02:24.953383    3758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 10:02:24.955899    3758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 10:02:24.958387    3758 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 10:02:25.251799    3758 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 10:02:25.676108    3758 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 10:02:26.249233    3758 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 10:02:26.249936    3758 kubeadm.go:310] 
	I0816 10:02:26.250001    3758 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 10:02:26.250014    3758 kubeadm.go:310] 
	I0816 10:02:26.250090    3758 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 10:02:26.250102    3758 kubeadm.go:310] 
	I0816 10:02:26.250122    3758 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 10:02:26.250175    3758 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 10:02:26.250221    3758 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 10:02:26.250229    3758 kubeadm.go:310] 
	I0816 10:02:26.250268    3758 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 10:02:26.250272    3758 kubeadm.go:310] 
	I0816 10:02:26.250305    3758 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 10:02:26.250313    3758 kubeadm.go:310] 
	I0816 10:02:26.250359    3758 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 10:02:26.250425    3758 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 10:02:26.250484    3758 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 10:02:26.250491    3758 kubeadm.go:310] 
	I0816 10:02:26.250569    3758 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 10:02:26.250648    3758 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 10:02:26.250656    3758 kubeadm.go:310] 
	I0816 10:02:26.250716    3758 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 73merd.1elxantqs1p5mkiz \
	I0816 10:02:26.250793    3758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:995590413ea712c4f7db2cf849d28264ebc291e6cd53d741ce1d22c1daaa1602 \
	I0816 10:02:26.250811    3758 kubeadm.go:310] 	--control-plane 
	I0816 10:02:26.250817    3758 kubeadm.go:310] 
	I0816 10:02:26.250879    3758 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 10:02:26.250885    3758 kubeadm.go:310] 
	I0816 10:02:26.250947    3758 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 73merd.1elxantqs1p5mkiz \
	I0816 10:02:26.251036    3758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:995590413ea712c4f7db2cf849d28264ebc291e6cd53d741ce1d22c1daaa1602 
	I0816 10:02:26.251297    3758 kubeadm.go:310] W0816 17:02:15.599099    1566 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 10:02:26.251521    3758 kubeadm.go:310] W0816 17:02:15.600578    1566 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 10:02:26.251608    3758 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
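	Note: the Service-Kubelet warning is cosmetic for this run, since minikube started the kubelet itself at 10:02:14 via the systemd units it writes; on a node you administer yourself, the remedy is exactly the command kubeadm prints:
	
	    sudo systemctl enable kubelet.service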
	I0816 10:02:26.251620    3758 cni.go:84] Creating CNI manager for ""
	I0816 10:02:26.251624    3758 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0816 10:02:26.289324    3758 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 10:02:26.363318    3758 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0816 10:02:26.368971    3758 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0816 10:02:26.368983    3758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0816 10:02:26.382855    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0816 10:02:26.629869    3758 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 10:02:26.629947    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-286000 minikube.k8s.io/updated_at=2024_08_16T10_02_26_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd minikube.k8s.io/name=ha-286000 minikube.k8s.io/primary=true
	I0816 10:02:26.629949    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:26.658712    3758 ops.go:34] apiserver oom_adj: -16
	I0816 10:02:26.756402    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:27.257667    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:27.757259    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:28.256864    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:28.757602    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:29.256802    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:29.757282    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:29.882342    3758 kubeadm.go:1113] duration metric: took 3.252511045s to wait for elevateKubeSystemPrivileges
	I0816 10:02:29.882365    3758 kubeadm.go:394] duration metric: took 14.539506386s to StartCluster
	I0816 10:02:29.882380    3758 settings.go:142] acquiring lock: {Name:mk60e377244eac756871b74b6bd45115654652a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:29.882471    3758 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:02:29.882922    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/kubeconfig: {Name:mk7f4491c8ec5cf96c96a5a1a5dbfba4301394a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:29.883167    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 10:02:29.883170    3758 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:02:29.883182    3758 start.go:241] waiting for startup goroutines ...
	I0816 10:02:29.883204    3758 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 10:02:29.883238    3758 addons.go:69] Setting storage-provisioner=true in profile "ha-286000"
	I0816 10:02:29.883244    3758 addons.go:69] Setting default-storageclass=true in profile "ha-286000"
	I0816 10:02:29.883261    3758 addons.go:234] Setting addon storage-provisioner=true in "ha-286000"
	I0816 10:02:29.883278    3758 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-286000"
	I0816 10:02:29.883280    3758 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:02:29.883305    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:02:29.883558    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:29.883573    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:29.883575    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:29.883598    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:29.892969    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51028
	I0816 10:02:29.893238    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51030
	I0816 10:02:29.893356    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:29.893605    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:29.893730    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:29.893741    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:29.893938    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:29.893951    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:29.893976    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:29.894142    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:29.894254    3758 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:02:29.894338    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:29.894365    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:29.894397    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:29.894424    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:02:29.896690    3758 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:02:29.896968    3758 kapi.go:59] client config for ha-286000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key", CAFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xa202f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 10:02:29.897373    3758 cert_rotation.go:140] Starting client certificate rotation controller
	I0816 10:02:29.897530    3758 addons.go:234] Setting addon default-storageclass=true in "ha-286000"
	I0816 10:02:29.897550    3758 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:02:29.897771    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:29.897789    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:29.903669    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51032
	I0816 10:02:29.904077    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:29.904431    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:29.904451    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:29.904736    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:29.904889    3758 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:02:29.904993    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:29.905054    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:02:29.906086    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:29.906730    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51034
	I0816 10:02:29.907081    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:29.907463    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:29.907491    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:29.907694    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:29.908045    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:29.908061    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:29.917300    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51036
	I0816 10:02:29.917667    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:29.918020    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:29.918034    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:29.918277    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:29.918384    3758 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:02:29.918483    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:29.918572    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:02:29.919534    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:29.919671    3758 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 10:02:29.919680    3758 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 10:02:29.919688    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:29.919789    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:29.919896    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:29.919990    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:29.920072    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:29.928735    3758 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 10:02:29.965711    3758 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 10:02:29.965725    3758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 10:02:29.965744    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:29.965926    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:29.966043    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:29.966151    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:29.966280    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:30.033245    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 10:02:30.037035    3758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 10:02:30.090040    3758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 10:02:30.396267    3758 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
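	Note: the replace pipeline a few lines up splices a hosts stanza into the CoreDNS Corefile ahead of the forward directive; reconstructed from the sed expressions (so treat it as a sketch), the injected fragment is:
	
	    hosts {
	       192.169.0.1 host.minikube.internal
	       fallthrough
	    }
	
	That stanza is what lets pods resolve host.minikube.internal to the hyperkit host at 192.169.0.1, which the "host record injected" message confirms.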
	I0816 10:02:30.396311    3758 main.go:141] libmachine: Making call to close driver server
	I0816 10:02:30.396323    3758 main.go:141] libmachine: (ha-286000) Calling .Close
	I0816 10:02:30.396482    3758 main.go:141] libmachine: Successfully made call to close driver server
	I0816 10:02:30.396488    3758 main.go:141] libmachine: (ha-286000) DBG | Closing plugin on server side
	I0816 10:02:30.396492    3758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 10:02:30.396501    3758 main.go:141] libmachine: Making call to close driver server
	I0816 10:02:30.396507    3758 main.go:141] libmachine: (ha-286000) Calling .Close
	I0816 10:02:30.396655    3758 main.go:141] libmachine: Successfully made call to close driver server
	I0816 10:02:30.396662    3758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 10:02:30.396713    3758 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0816 10:02:30.396725    3758 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0816 10:02:30.396804    3758 round_trippers.go:463] GET https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0816 10:02:30.396810    3758 round_trippers.go:469] Request Headers:
	I0816 10:02:30.396817    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:02:30.396822    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:02:30.402286    3758 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 10:02:30.402706    3758 round_trippers.go:463] PUT https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0816 10:02:30.402713    3758 round_trippers.go:469] Request Headers:
	I0816 10:02:30.402718    3758 round_trippers.go:473]     Content-Type: application/json
	I0816 10:02:30.402721    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:02:30.402723    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:02:30.404290    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:02:30.404403    3758 main.go:141] libmachine: Making call to close driver server
	I0816 10:02:30.404416    3758 main.go:141] libmachine: (ha-286000) Calling .Close
	I0816 10:02:30.404565    3758 main.go:141] libmachine: Successfully made call to close driver server
	I0816 10:02:30.404575    3758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 10:02:30.404586    3758 main.go:141] libmachine: (ha-286000) DBG | Closing plugin on server side
	I0816 10:02:30.497806    3758 main.go:141] libmachine: Making call to close driver server
	I0816 10:02:30.497818    3758 main.go:141] libmachine: (ha-286000) Calling .Close
	I0816 10:02:30.498006    3758 main.go:141] libmachine: Successfully made call to close driver server
	I0816 10:02:30.498011    3758 main.go:141] libmachine: (ha-286000) DBG | Closing plugin on server side
	I0816 10:02:30.498016    3758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 10:02:30.498023    3758 main.go:141] libmachine: Making call to close driver server
	I0816 10:02:30.498040    3758 main.go:141] libmachine: (ha-286000) Calling .Close
	I0816 10:02:30.498160    3758 main.go:141] libmachine: Successfully made call to close driver server
	I0816 10:02:30.498168    3758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 10:02:30.498179    3758 main.go:141] libmachine: (ha-286000) DBG | Closing plugin on server side
	I0816 10:02:30.538607    3758 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0816 10:02:30.596522    3758 addons.go:510] duration metric: took 713.336883ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0816 10:02:30.596578    3758 start.go:246] waiting for cluster config update ...
	I0816 10:02:30.596599    3758 start.go:255] writing updated cluster config ...
	I0816 10:02:30.634683    3758 out.go:201] 
	I0816 10:02:30.655754    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:02:30.655861    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:02:30.677634    3758 out.go:177] * Starting "ha-286000-m02" control-plane node in "ha-286000" cluster
	I0816 10:02:30.737443    3758 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:02:30.737475    3758 cache.go:56] Caching tarball of preloaded images
	I0816 10:02:30.737660    3758 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 10:02:30.737679    3758 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:02:30.737771    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:02:30.738414    3758 start.go:360] acquireMachinesLock for ha-286000-m02: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:02:30.738494    3758 start.go:364] duration metric: took 63.585µs to acquireMachinesLock for "ha-286000-m02"
	I0816 10:02:30.738517    3758 start.go:93] Provisioning new machine with config: &{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:02:30.738590    3758 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I0816 10:02:30.760743    3758 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 10:02:30.760905    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:30.760935    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:30.771028    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51041
	I0816 10:02:30.771383    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:30.771745    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:30.771760    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:30.771967    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:30.772077    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:02:30.772157    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:30.772261    3758 start.go:159] libmachine.API.Create for "ha-286000" (driver="hyperkit")
	I0816 10:02:30.772276    3758 client.go:168] LocalClient.Create starting
	I0816 10:02:30.772303    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem
	I0816 10:02:30.772358    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:02:30.772368    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:02:30.772413    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem
	I0816 10:02:30.772450    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:02:30.772460    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:02:30.772472    3758 main.go:141] libmachine: Running pre-create checks...
	I0816 10:02:30.772477    3758 main.go:141] libmachine: (ha-286000-m02) Calling .PreCreateCheck
	I0816 10:02:30.772545    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:30.772568    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetConfigRaw
	I0816 10:02:30.781772    3758 main.go:141] libmachine: Creating machine...
	I0816 10:02:30.781789    3758 main.go:141] libmachine: (ha-286000-m02) Calling .Create
	I0816 10:02:30.781943    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:30.782181    3758 main.go:141] libmachine: (ha-286000-m02) DBG | I0816 10:02:30.781934    3805 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:02:30.782269    3758 main.go:141] libmachine: (ha-286000-m02) Downloading /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1276/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 10:02:30.982082    3758 main.go:141] libmachine: (ha-286000-m02) DBG | I0816 10:02:30.982020    3805 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa...
	I0816 10:02:31.266609    3758 main.go:141] libmachine: (ha-286000-m02) DBG | I0816 10:02:31.266514    3805 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/ha-286000-m02.rawdisk...
	I0816 10:02:31.266629    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Writing magic tar header
	I0816 10:02:31.266638    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Writing SSH key tar header
	I0816 10:02:31.267382    3758 main.go:141] libmachine: (ha-286000-m02) DBG | I0816 10:02:31.267242    3805 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02 ...
	I0816 10:02:31.642864    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:31.642889    3758 main.go:141] libmachine: (ha-286000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/hyperkit.pid
	I0816 10:02:31.642912    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Using UUID f7630301-2bc3-4935-a751-ac72999da031
	I0816 10:02:31.669349    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Generated MAC 72:69:8f:11:68:1d
	I0816 10:02:31.669369    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000
	I0816 10:02:31.669397    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f7630301-2bc3-4935-a751-ac72999da031", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:02:31.669426    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f7630301-2bc3-4935-a751-ac72999da031", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:02:31.669455    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "f7630301-2bc3-4935-a751-ac72999da031", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/ha-286000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"}
	I0816 10:02:31.669484    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U f7630301-2bc3-4935-a751-ac72999da031 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/ha-286000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"
	I0816 10:02:31.669499    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 10:02:31.672492    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: Pid is 3806
	I0816 10:02:31.672957    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 0
	I0816 10:02:31.673000    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:31.673054    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:31.674014    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:31.674086    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:02:31.674098    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:02:31.674120    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:02:31.674129    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:02:31.674137    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:02:31.680025    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 10:02:31.688109    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 10:02:31.688968    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:02:31.688993    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:02:31.689006    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:02:31.689016    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:02:32.077133    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 10:02:32.077153    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 10:02:32.191789    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:02:32.191806    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:02:32.191814    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:02:32.191825    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:02:32.192676    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 10:02:32.192686    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 10:02:33.675165    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 1
	I0816 10:02:33.675183    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:33.675297    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:33.676089    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:33.676141    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:02:33.676153    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:02:33.676162    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:02:33.676169    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:02:33.676193    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:02:35.676266    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 2
	I0816 10:02:35.676285    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:35.676360    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:35.677182    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:35.677237    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:02:35.677248    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:02:35.677262    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:02:35.677270    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:02:35.677281    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:02:37.678515    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 3
	I0816 10:02:37.678532    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:37.678572    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:37.679405    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:37.679441    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:02:37.679451    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:02:37.679461    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:02:37.679468    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:02:37.679483    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:02:37.782496    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:37 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 10:02:37.782531    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:37 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 10:02:37.782540    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:37 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 10:02:37.806630    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:37 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 10:02:39.680161    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 4
	I0816 10:02:39.680178    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:39.680273    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:39.681064    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:39.681112    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:02:39.681136    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:02:39.681155    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:02:39.681170    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:02:39.681181    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:02:41.681380    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 5
	I0816 10:02:41.681414    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:41.681563    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:41.682343    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:41.682392    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:02:41.682406    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:02:41.682415    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found match: 72:69:8f:11:68:1d
	I0816 10:02:41.682421    3758 main.go:141] libmachine: (ha-286000-m02) DBG | IP: 192.169.0.6
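
The retry loop above polls /var/db/dhcpd_leases once per attempt until the VM's generated MAC (72:69:8f:11:68:1d) shows up with a fresh lease; the IP it resolves (192.169.0.6) is what every later SSH step targets. A minimal, self-contained Go sketch of that lookup, assuming the macOS dhcpd_leases record layout implied by the parsed entries in the log, with ip_address preceding hw_address in each record block; this is an illustration, not minikube's actual parser:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findIPForMAC scans a dhcpd_leases-style file and returns the ip_address
// of the first record whose hw_address line contains the given MAC.
func findIPForMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address=") && strings.Contains(line, mac):
			return ip, nil // assumes ip_address precedes hw_address in the record
		case line == "}":
			ip = "" // record boundary: forget the previous entry's address
		}
	}
	if err := sc.Err(); err != nil {
		return "", err
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}

func main() {
	ip, err := findIPForMAC("/var/db/dhcpd_leases", "72:69:8f:11:68:1d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(ip)
}

A caller would wrap this in the same fixed-interval retry loop the log shows, since the lease only appears once the guest's DHCP client has run.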
	I0816 10:02:41.682475    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetConfigRaw
	I0816 10:02:41.683135    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:41.683257    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:41.683358    3758 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 10:02:41.683367    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetState
	I0816 10:02:41.683478    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:41.683537    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:41.684414    3758 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 10:02:41.684423    3758 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 10:02:41.684427    3758 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 10:02:41.684431    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:41.684566    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:41.684692    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:41.684809    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:41.684943    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:41.685095    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:41.685300    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:41.685308    3758 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 10:02:42.746559    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
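
The exit-0 round trip above is the driver's readiness probe: it keeps opening SSH sessions and running a no-op command until one returns status 0. A rough Go sketch of the same probe using the system ssh client; the user, host, key path, and retry policy below are illustrative placeholders, not minikube's values:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH retries a no-op remote command until it exits 0 or attempts run out.
func waitForSSH(user, host, keyPath string, attempts int, delay time.Duration) error {
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("ssh",
			"-i", keyPath,
			"-o", "StrictHostKeyChecking=no",
			"-o", "ConnectTimeout=5",
			fmt.Sprintf("%s@%s", user, host),
			"exit 0")
		if err := cmd.Run(); err == nil {
			return nil // SSH is up: the no-op returned status 0
		}
		time.Sleep(delay)
	}
	return fmt.Errorf("ssh to %s@%s not ready after %d attempts", user, host, attempts)
}

func main() {
	if err := waitForSSH("docker", "192.169.0.6", "/path/to/id_rsa", 10, 2*time.Second); err != nil {
		fmt.Println(err)
	}
}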
	I0816 10:02:42.746571    3758 main.go:141] libmachine: Detecting the provisioner...
	I0816 10:02:42.746577    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:42.746714    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:42.746824    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.746914    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.746988    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:42.747112    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:42.747269    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:42.747277    3758 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 10:02:42.811630    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 10:02:42.811666    3758 main.go:141] libmachine: found compatible host: buildroot
	I0816 10:02:42.811672    3758 main.go:141] libmachine: Provisioning with buildroot...
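
Provisioner detection above amounts to `cat /etc/os-release` plus a match on the ID field ("buildroot" in this run). A small Go sketch of that parse, assuming the standard os-release key=value format:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// osReleaseID returns the value of the ID field from an os-release file.
func osReleaseID(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "ID=") {
			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`), nil
		}
	}
	return "", fmt.Errorf("no ID field in %s", path)
}

func main() {
	id, err := osReleaseID("/etc/os-release")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("detected provisioner:", id) // "buildroot" on this guest
}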
	I0816 10:02:42.811681    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:02:42.811816    3758 buildroot.go:166] provisioning hostname "ha-286000-m02"
	I0816 10:02:42.811828    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:02:42.811943    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:42.812045    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:42.812136    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.812274    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.812376    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:42.812513    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:42.812673    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:42.812682    3758 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-286000-m02 && echo "ha-286000-m02" | sudo tee /etc/hostname
	I0816 10:02:42.887531    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-286000-m02
	
	I0816 10:02:42.887545    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:42.887676    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:42.887769    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.887867    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.887953    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:42.888075    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:42.888214    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:42.888225    3758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-286000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-286000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-286000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 10:02:42.959625    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 10:02:42.959641    3758 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19461-1276/.minikube CaCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19461-1276/.minikube}
	I0816 10:02:42.959649    3758 buildroot.go:174] setting up certificates
	I0816 10:02:42.959661    3758 provision.go:84] configureAuth start
	I0816 10:02:42.959666    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:02:42.959803    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:02:42.959899    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:42.959980    3758 provision.go:143] copyHostCerts
	I0816 10:02:42.960007    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:02:42.960055    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem, removing ...
	I0816 10:02:42.960061    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:02:42.960954    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem (1078 bytes)
	I0816 10:02:42.961160    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:02:42.961190    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem, removing ...
	I0816 10:02:42.961195    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:02:42.961273    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem (1123 bytes)
	I0816 10:02:42.961439    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:02:42.961509    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem, removing ...
	I0816 10:02:42.961515    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:02:42.961594    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem (1679 bytes)
	I0816 10:02:42.961746    3758 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem org=jenkins.ha-286000-m02 san=[127.0.0.1 192.169.0.6 ha-286000-m02 localhost minikube]
	I0816 10:02:43.093511    3758 provision.go:177] copyRemoteCerts
	I0816 10:02:43.093556    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 10:02:43.093572    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:43.093719    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:43.093818    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.093917    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:43.094005    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:02:43.132229    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 10:02:43.132299    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 10:02:43.151814    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 10:02:43.151873    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 10:02:43.171464    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 10:02:43.171532    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 10:02:43.191045    3758 provision.go:87] duration metric: took 231.381757ms to configureAuth
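
The configureAuth step above issues a server certificate whose SANs cover the loopback address, the node IP, and the host names listed in the log, signed by the pre-existing minikube CA. A condensed, self-contained Go sketch of that kind of issuance using only the standard library; it creates a throwaway CA in memory instead of loading minikube's ca.pem, so it illustrates the shape of the step, not the real provision code, and error handling is elided for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for minikubeCA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert carrying the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-286000-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.6")},
		DNSNames:     []string{"ha-286000-m02", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}

The resulting PEM pair is what the copyRemoteCerts step then pushes to /etc/docker/server.pem and /etc/docker/server-key.pem over SSH.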
	I0816 10:02:43.191057    3758 buildroot.go:189] setting minikube options for container-runtime
	I0816 10:02:43.191187    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:02:43.191200    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:43.191321    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:43.191410    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:43.191497    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.191580    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.191658    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:43.191759    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:43.191891    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:43.191898    3758 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 10:02:43.256733    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 10:02:43.256744    3758 buildroot.go:70] root file system type: tmpfs
	I0816 10:02:43.256823    3758 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 10:02:43.256835    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:43.256961    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:43.257048    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.257142    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.257220    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:43.257364    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:43.257505    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:43.257553    3758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 10:02:43.330709    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 10:02:43.330729    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:43.330874    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:43.330977    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.331073    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.331167    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:43.331293    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:43.331440    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:43.331451    3758 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 10:02:44.884634    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0816 10:02:44.884648    3758 main.go:141] libmachine: Checking connection to Docker...
	I0816 10:02:44.884655    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetURL
	I0816 10:02:44.884793    3758 main.go:141] libmachine: Docker is up and running!
	I0816 10:02:44.884799    3758 main.go:141] libmachine: Reticulating splines...
	I0816 10:02:44.884804    3758 client.go:171] duration metric: took 14.112778669s to LocalClient.Create
	I0816 10:02:44.884820    3758 start.go:167] duration metric: took 14.112810851s to libmachine.API.Create "ha-286000"
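
The docker.service update a few lines above follows a write-then-swap pattern: render the unit to docker.service.new, diff it against the installed file, and only move it into place (then daemon-reload, enable, and restart) when it differs. A local Go sketch of that idempotent update using text/template; the unit body is abbreviated and the paths are illustrative, not the values minikube uses:

package main

import (
	"bytes"
	"fmt"
	"os"
	"text/template"
)

// Abbreviated unit body; the real one above carries many more directives.
const unitTmpl = `[Unit]
Description=Docker Application Container Engine
After=network.target minikube-automount.service docker.socket

[Service]
Type=notify
Environment="NO_PROXY={{.NoProxy}}"
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock

[Install]
WantedBy=multi-user.target
`

func main() {
	var buf bytes.Buffer
	tmpl := template.Must(template.New("docker").Parse(unitTmpl))
	if err := tmpl.Execute(&buf, struct{ NoProxy string }{"192.169.0.5"}); err != nil {
		panic(err)
	}
	const path = "docker.service" // illustrative; the log writes /lib/systemd/system/docker.service
	old, _ := os.ReadFile(path)   // a missing file reads as empty, which forces the first write
	if bytes.Equal(old, buf.Bytes()) {
		fmt.Println("unit unchanged; skipping daemon-reload/restart")
		return
	}
	if err := os.WriteFile(path+".new", buf.Bytes(), 0o644); err != nil {
		panic(err)
	}
	if err := os.Rename(path+".new", path); err != nil {
		panic(err)
	}
	fmt.Println("unit updated; daemon-reload and docker restart would follow")
}

On a fresh VM the diff fails because no docker.service exists yet, which is exactly the "can't stat" output captured in the log before the symlink is created.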
	I0816 10:02:44.884826    3758 start.go:293] postStartSetup for "ha-286000-m02" (driver="hyperkit")
	I0816 10:02:44.884832    3758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 10:02:44.884843    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:44.884991    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 10:02:44.885004    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:44.885101    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:44.885204    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:44.885335    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:44.885428    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:02:44.925701    3758 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 10:02:44.929599    3758 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 10:02:44.929613    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/addons for local assets ...
	I0816 10:02:44.929722    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/files for local assets ...
	I0816 10:02:44.929908    3758 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> 18312.pem in /etc/ssl/certs
	I0816 10:02:44.929914    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /etc/ssl/certs/18312.pem
	I0816 10:02:44.930119    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 10:02:44.940484    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:02:44.973200    3758 start.go:296] duration metric: took 88.36731ms for postStartSetup
	I0816 10:02:44.973230    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetConfigRaw
	I0816 10:02:44.973848    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:02:44.973996    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:02:44.974351    3758 start.go:128] duration metric: took 14.23599979s to createHost
	I0816 10:02:44.974366    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:44.974448    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:44.974568    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:44.974660    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:44.974744    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:44.974844    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:44.974966    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:44.974973    3758 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 10:02:45.036267    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723827764.932365297
	
	I0816 10:02:45.036285    3758 fix.go:216] guest clock: 1723827764.932365297
	I0816 10:02:45.036291    3758 fix.go:229] Guest: 2024-08-16 10:02:44.932365297 -0700 PDT Remote: 2024-08-16 10:02:44.97436 -0700 PDT m=+56.899083401 (delta=-41.994703ms)
	I0816 10:02:45.036301    3758 fix.go:200] guest clock delta is within tolerance: -41.994703ms
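
The guest-clock check parses the VM's `date +%s.%N` output and compares it against the host clock at the same instant; here the skew is -41.994703ms, well inside tolerance. A Go sketch reproducing exactly that arithmetic from the two timestamps in the log; the 2s tolerance below is an assumed threshold for illustration:

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns `date +%s.%N` output into a time.Time.
// Assumes the fractional part is the full 9-digit nanosecond field %N emits.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1723827764.932365297") // guest output from the log
	if err != nil {
		panic(err)
	}
	remote := time.Unix(1723827764, 974360000) // host clock at the same instant (from the log)
	delta := guest.Sub(remote)                 // -41.994703ms, matching the logged delta
	const tolerance = 2 * time.Second          // assumed threshold
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance; a resync would be needed\n", delta)
	}
}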
	I0816 10:02:45.036306    3758 start.go:83] releasing machines lock for "ha-286000-m02", held for 14.298063452s
	I0816 10:02:45.036338    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:45.036468    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:02:45.062334    3758 out.go:177] * Found network options:
	I0816 10:02:45.105996    3758 out.go:177]   - NO_PROXY=192.169.0.5
	W0816 10:02:45.128174    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:02:45.128216    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:45.128829    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:45.128969    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:45.129060    3758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 10:02:45.129088    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	W0816 10:02:45.129115    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:02:45.129222    3758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0816 10:02:45.129232    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:45.129238    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:45.129370    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:45.129393    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:45.129500    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:45.129515    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:45.129631    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:02:45.129647    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:45.129727    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	W0816 10:02:45.164265    3758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 10:02:45.164324    3758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 10:02:45.211468    3758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 10:02:45.211489    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:02:45.211595    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:02:45.228144    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 10:02:45.237310    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 10:02:45.246272    3758 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 10:02:45.246316    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 10:02:45.255987    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:02:45.265400    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 10:02:45.274661    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:02:45.283628    3758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 10:02:45.292648    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 10:02:45.302207    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 10:02:45.311314    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 10:02:45.320414    3758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 10:02:45.328532    3758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 10:02:45.337229    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:45.444954    3758 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0816 10:02:45.464569    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:02:45.464652    3758 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 10:02:45.488292    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:02:45.500162    3758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 10:02:45.561835    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:02:45.572027    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:02:45.582122    3758 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 10:02:45.603011    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:02:45.613488    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:02:45.628434    3758 ssh_runner.go:195] Run: which cri-dockerd
	I0816 10:02:45.631358    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 10:02:45.638496    3758 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 10:02:45.652676    3758 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 10:02:45.755971    3758 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 10:02:45.860533    3758 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 10:02:45.860559    3758 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
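
The small /etc/docker/daemon.json pushed above (130 bytes) pins Docker's cgroup driver to cgroupfs. A sketch of producing such a file with Docker's documented exec-opts key; the exact fields and byte count minikube writes may differ:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	out, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // content to install as /etc/docker/daemon.json
}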
	I0816 10:02:45.874570    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:45.976095    3758 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:02:48.459358    3758 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.483288325s)
	I0816 10:02:48.459417    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0816 10:02:48.469882    3758 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0816 10:02:48.484429    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:02:48.495038    3758 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0816 10:02:48.597129    3758 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0816 10:02:48.691412    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:48.797592    3758 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0816 10:02:48.812075    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:02:48.823307    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:48.918652    3758 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0816 10:02:48.980191    3758 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0816 10:02:48.980257    3758 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
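
"Will wait 60s for socket path" above is a simple poll: stat the CRI socket until it exists or the deadline passes. A minimal Go sketch of that wait; the path mirrors the log, while the poll interval is an assumption:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls for a file (here the CRI socket) until it appears or times out.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForPath("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("cri-dockerd socket is ready")
}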
	I0816 10:02:48.984954    3758 start.go:563] Will wait 60s for crictl version
	I0816 10:02:48.985011    3758 ssh_runner.go:195] Run: which crictl
	I0816 10:02:48.988185    3758 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 10:02:49.015262    3758 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0816 10:02:49.015344    3758 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:02:49.032341    3758 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:02:49.078128    3758 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0816 10:02:49.137645    3758 out.go:177]   - env NO_PROXY=192.169.0.5
	I0816 10:02:49.176661    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:02:49.177129    3758 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0816 10:02:49.181778    3758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:02:49.191522    3758 mustload.go:65] Loading cluster: ha-286000
	I0816 10:02:49.191665    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:02:49.191885    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:49.191909    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:49.200721    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51064
	I0816 10:02:49.201069    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:49.201413    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:49.201424    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:49.201648    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:49.201776    3758 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:02:49.201862    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:49.201960    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:02:49.202920    3758 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:02:49.203188    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:49.203205    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:49.211926    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51066
	I0816 10:02:49.212258    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:49.212635    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:49.212652    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:49.212860    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:49.212967    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:49.213056    3758 certs.go:68] Setting up /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000 for IP: 192.169.0.6
	I0816 10:02:49.213061    3758 certs.go:194] generating shared ca certs ...
	I0816 10:02:49.213071    3758 certs.go:226] acquiring lock for ca certs: {Name:mka549fbc467849834d2267da204028c49d88a3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:49.213229    3758 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key
	I0816 10:02:49.213305    3758 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key
	I0816 10:02:49.213314    3758 certs.go:256] generating profile certs ...
	I0816 10:02:49.213406    3758 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key
	I0816 10:02:49.213429    3758 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.5eb44438
	I0816 10:02:49.213443    3758 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.5eb44438 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I0816 10:02:49.263058    3758 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.5eb44438 ...
	I0816 10:02:49.263077    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.5eb44438: {Name:mk266d5e842df442c104f85113e06d171d5a89ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:49.263395    3758 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.5eb44438 ...
	I0816 10:02:49.263404    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.5eb44438: {Name:mk1428912a4c1edaafe55f9bff302c19ad337785 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:49.263623    3758 certs.go:381] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.5eb44438 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt
	I0816 10:02:49.263846    3758 certs.go:385] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.5eb44438 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key
	I0816 10:02:49.264093    3758 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key
	I0816 10:02:49.264103    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 10:02:49.264125    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 10:02:49.264143    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 10:02:49.264163    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 10:02:49.264180    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 10:02:49.264199    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 10:02:49.264223    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 10:02:49.264242    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 10:02:49.264331    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem (1338 bytes)
	W0816 10:02:49.264378    3758 certs.go:480] ignoring /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831_empty.pem, impossibly tiny 0 bytes
	I0816 10:02:49.264386    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 10:02:49.264418    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem (1078 bytes)
	I0816 10:02:49.264449    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem (1123 bytes)
	I0816 10:02:49.264477    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem (1679 bytes)
	I0816 10:02:49.264545    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:02:49.264593    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /usr/share/ca-certificates/18312.pem
	I0816 10:02:49.264614    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:49.264634    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem -> /usr/share/ca-certificates/1831.pem
	I0816 10:02:49.264663    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:49.264814    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:49.264897    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:49.265014    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:49.265108    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:49.296192    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0816 10:02:49.299577    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0816 10:02:49.312765    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0816 10:02:49.316551    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0816 10:02:49.332167    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0816 10:02:49.336199    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0816 10:02:49.349166    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0816 10:02:49.354242    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0816 10:02:49.364119    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0816 10:02:49.367349    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0816 10:02:49.375423    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0816 10:02:49.378717    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0816 10:02:49.386918    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 10:02:49.408036    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 10:02:49.428163    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 10:02:49.448952    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 10:02:49.468986    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0816 10:02:49.489674    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 10:02:49.509597    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 10:02:49.530175    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 10:02:49.550578    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /usr/share/ca-certificates/18312.pem (1708 bytes)
	I0816 10:02:49.571266    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 10:02:49.590995    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem --> /usr/share/ca-certificates/1831.pem (1338 bytes)
	I0816 10:02:49.611766    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0816 10:02:49.625222    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0816 10:02:49.638852    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0816 10:02:49.653497    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0816 10:02:49.667098    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0816 10:02:49.680935    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0816 10:02:49.695437    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
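
The scp lines above push every cert plus the kubeconfig into the VM over the SSH client opened at sshutil.go:53 (docker@192.169.0.5:22, key auth). minikube's ssh_runner speaks the scp protocol; a simplified sketch of the same idea using golang.org/x/crypto/ssh, piping bytes into sudo tee instead:

```go
package main

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// pushFile writes data to remotePath on the VM by piping it into `sudo tee`,
// a stand-in for the scp transfers in the log above.
func pushFile(client *ssh.Client, data []byte, remotePath string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	return sess.Run(fmt.Sprintf("sudo tee %q >/dev/null", remotePath))
}

func main() {
	key, err := os.ReadFile("/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "192.169.0.5:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test VM
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	ca, err := os.ReadFile("ca.crt")
	if err != nil {
		panic(err)
	}
	if err := pushFile(client, ca, "/var/lib/minikube/certs/ca.crt"); err != nil {
		panic(err)
	}
}
```
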
	I0816 10:02:49.708684    3758 ssh_runner.go:195] Run: openssl version
	I0816 10:02:49.712979    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18312.pem && ln -fs /usr/share/ca-certificates/18312.pem /etc/ssl/certs/18312.pem"
	I0816 10:02:49.721565    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18312.pem
	I0816 10:02:49.725513    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 16:57 /usr/share/ca-certificates/18312.pem
	I0816 10:02:49.725580    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18312.pem
	I0816 10:02:49.730364    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18312.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 10:02:49.739184    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 10:02:49.747494    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:49.750923    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:48 /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:49.750955    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:49.755195    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 10:02:49.763703    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1831.pem && ln -fs /usr/share/ca-certificates/1831.pem /etc/ssl/certs/1831.pem"
	I0816 10:02:49.772914    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1831.pem
	I0816 10:02:49.776377    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 16:57 /usr/share/ca-certificates/1831.pem
	I0816 10:02:49.776421    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1831.pem
	I0816 10:02:49.780731    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1831.pem /etc/ssl/certs/51391683.0"
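
Each of the three blocks above installs one CA into the VM's trust store: link the PEM under /usr/share/ca-certificates, compute its OpenSSL subject hash, and create the <hash>.0 symlink in /etc/ssl/certs that OpenSSL uses for lookup (b5213941.0 for minikubeCA.pem in this run). A small sketch of those two steps, shelling out to openssl the same way (needs root and openssl on the target):

```go
package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// trustCert links certPath into /etc/ssl/certs under its OpenSSL subject
// hash, mirroring the `openssl x509 -hash -noout` + `ln -fs` pair above.
func trustCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link, like ln -fs does
	return os.Symlink(certPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		panic(err)
	}
}
```
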
	I0816 10:02:49.789078    3758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 10:02:49.792242    3758 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 10:02:49.792277    3758 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.0 docker true true} ...
	I0816 10:02:49.792332    3758 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-286000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
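
kubeadm.go:946 above renders the kubelet systemd drop-in for the new node; the per-node pieces are only --hostname-override and --node-ip. One plausible way to produce such a unit, sketched with text/template (the parameter struct is invented for illustration):

```go
package main

import (
	"os"
	"text/template"
)

// kubeletUnit is a hypothetical parameter struct for the drop-in above.
type kubeletUnit struct {
	Version, Hostname, NodeIP string
}

const unitTmpl = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unitTmpl))
	// Values taken from the log above.
	err := t.Execute(os.Stdout, kubeletUnit{
		Version:  "v1.31.0",
		Hostname: "ha-286000-m02",
		NodeIP:   "192.169.0.6",
	})
	if err != nil {
		panic(err)
	}
}
```
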
	I0816 10:02:49.792349    3758 kube-vip.go:115] generating kube-vip config ...
	I0816 10:02:49.792381    3758 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 10:02:49.804469    3758 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 10:02:49.804511    3758 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
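
The generated kube-vip static pod above carries the whole HA arrangement in its env: the VIP itself (address=192.169.0.254, announced via ARP on eth0), Lease-based leader election (plndr-cp-lock, 5s lease / 3s renew deadline / 1s retry), and, because kube-vip.go:167 auto-enabled it, control-plane load balancing on port 8443. For illustration, a sketch that pulls the VIP back out of the manifest with gopkg.in/yaml.v3 (the struct shapes are invented, trimmed to just what is read):

```go
package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// Minimal shapes for reading the kube-vip env out of the manifest above.
type manifest struct {
	Spec struct {
		Containers []struct {
			Env []struct {
				Name  string `yaml:"name"`
				Value string `yaml:"value"`
			} `yaml:"env"`
		} `yaml:"containers"`
	} `yaml:"spec"`
}

func main() {
	raw, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
	if err != nil {
		panic(err)
	}
	var m manifest
	if err := yaml.Unmarshal(raw, &m); err != nil {
		panic(err)
	}
	if len(m.Spec.Containers) == 0 {
		panic("no containers in manifest")
	}
	for _, e := range m.Spec.Containers[0].Env {
		if e.Name == "address" {
			fmt.Println("control-plane VIP:", e.Value) // 192.169.0.254 in this run
		}
	}
}
```
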
	I0816 10:02:49.804567    3758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 10:02:49.812921    3758 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0816 10:02:49.812983    3758 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0816 10:02:49.821031    3758 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0816 10:02:49.821035    3758 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0816 10:02:49.821039    3758 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubectl
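
download.go:107 fetches the three v1.31.0 binaries in parallel, each pinned to the .sha256 file published next to it (the checksum=file:... query string is go-getter syntax). A minimal sketch of the same download-and-verify idea with plain net/http and crypto/sha256, run serially for brevity:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetchVerified downloads url to dest and checks the bytes against the hex
// digest published at url+".sha256", as the pinned downloads above do.
func fetchVerified(url, dest string) error {
	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	sum, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}
	want := strings.Fields(string(sum))[0]

	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch for %s: got %s want %s", url, got, want)
	}
	return nil
}

func main() {
	const base = "https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/"
	for _, bin := range []string{"kubeadm", "kubectl", "kubelet"} {
		if err := fetchVerified(base+bin, bin); err != nil {
			panic(err)
		}
	}
}
```
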
	I0816 10:02:51.911279    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0816 10:02:51.911374    3758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0816 10:02:51.915115    3758 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0816 10:02:51.915150    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0816 10:02:52.537289    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0816 10:02:52.537383    3758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0816 10:02:52.540898    3758 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0816 10:02:52.540929    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0816 10:02:53.267467    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:02:53.278589    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0816 10:02:53.278731    3758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0816 10:02:53.282055    3758 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0816 10:02:53.282073    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0816 10:02:53.498714    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0816 10:02:53.506112    3758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0816 10:02:53.519672    3758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 10:02:53.533551    3758 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0816 10:02:53.548024    3758 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0816 10:02:53.551124    3758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:02:53.560470    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:53.659270    3758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:02:53.674625    3758 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:02:53.674900    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:53.674923    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:53.683891    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51093
	I0816 10:02:53.684256    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:53.684641    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:53.684658    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:53.684885    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:53.685003    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:53.685084    3758 start.go:317] joinCluster: &{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:02:53.685155    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0816 10:02:53.685167    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:53.685248    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:53.685339    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:53.685423    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:53.685503    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:53.765911    3758 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:02:53.765972    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mpmdnf.6rzc0wpb01e0t6lz --discovery-token-ca-cert-hash sha256:995590413ea712c4f7db2cf849d28264ebc291e6cd53d741ce1d22c1daaa1602 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-286000-m02 --control-plane --apiserver-advertise-address=192.169.0.6 --apiserver-bind-port=8443"
	I0816 10:03:22.070391    3758 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mpmdnf.6rzc0wpb01e0t6lz --discovery-token-ca-cert-hash sha256:995590413ea712c4f7db2cf849d28264ebc291e6cd53d741ce1d22c1daaa1602 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-286000-m02 --control-plane --apiserver-advertise-address=192.169.0.6 --apiserver-bind-port=8443": (28.304928658s)
	I0816 10:03:22.070414    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
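
The join above is a two-step exchange: `kubeadm token create --print-join-command --ttl=0` on the existing control plane yields the token and CA cert hash, and the printed command is replayed on m02 with the extra control-plane flags, taking about 28s here. A rough sketch of composing that second command from the first (os/exec; assumes kubeadm on PATH and a working kubeconfig, and prints rather than runs the result):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Step 1, on an existing control-plane node: mint a join command.
	out, err := exec.Command("kubeadm", "token", "create",
		"--print-join-command", "--ttl=0").Output()
	if err != nil {
		panic(err)
	}
	join := strings.TrimSpace(string(out))

	// Step 2, to run on the joining node: promote it to a control plane,
	// mirroring the extra flags in the log above.
	join += " --control-plane" +
		" --apiserver-advertise-address=192.169.0.6" +
		" --apiserver-bind-port=8443" +
		" --ignore-preflight-errors=all" +
		" --cri-socket unix:///var/run/cri-dockerd.sock" +
		" --node-name=ha-286000-m02"
	fmt.Println(join)
}
```
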
	I0816 10:03:22.427732    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-286000-m02 minikube.k8s.io/updated_at=2024_08_16T10_03_22_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd minikube.k8s.io/name=ha-286000 minikube.k8s.io/primary=false
	I0816 10:03:22.531211    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-286000-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0816 10:03:22.629050    3758 start.go:319] duration metric: took 28.944509117s to joinCluster
	I0816 10:03:22.629105    3758 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:03:22.629360    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:03:22.652598    3758 out.go:177] * Verifying Kubernetes components...
	I0816 10:03:22.726472    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:03:22.952731    3758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:03:22.975401    3758 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:03:22.975621    3758 kapi.go:59] client config for ha-286000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key", CAFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xa202f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0816 10:03:22.975663    3758 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0816 10:03:22.975836    3758 node_ready.go:35] waiting up to 6m0s for node "ha-286000-m02" to be "Ready" ...
	I0816 10:03:22.975886    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:22.975891    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:22.975897    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:22.975901    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:22.983180    3758 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0816 10:03:23.476314    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:23.476365    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:23.476380    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:23.476394    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:23.502874    3758 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0816 10:03:23.977290    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:23.977304    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:23.977311    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:23.977315    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:23.979495    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:24.477835    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:24.477851    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:24.477858    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:24.477861    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:24.480701    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:24.977514    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:24.977568    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:24.977580    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:24.977588    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:24.980856    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:24.981299    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:25.476235    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:25.476252    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:25.476260    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:25.476277    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:25.478567    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:25.976418    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:25.976469    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:25.976477    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:25.976480    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:25.978730    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:26.476423    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:26.476439    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:26.476445    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:26.476448    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:26.478424    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:26.975919    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:26.975934    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:26.975940    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:26.975945    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:26.977809    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:27.475966    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:27.475981    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:27.475987    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:27.475990    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:27.477769    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:27.478121    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:27.977123    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:27.977139    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:27.977146    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:27.977150    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:27.979266    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:28.477140    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:28.477226    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:28.477254    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:28.477264    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:28.480386    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:28.977368    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:28.977407    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:28.977420    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:28.977427    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:28.980479    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:29.477521    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:29.477545    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:29.477556    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:29.477565    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:29.481179    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:29.481510    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:29.975906    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:29.975932    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:29.975943    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:29.975949    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:29.978633    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:30.477171    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:30.477189    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:30.477198    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:30.477201    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:30.480005    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:30.977947    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:30.977973    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:30.977993    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:30.977998    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:30.982219    3758 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 10:03:31.476252    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:31.476327    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:31.476341    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:31.476347    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:31.479417    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:31.975987    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:31.976012    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:31.976023    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:31.976029    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:31.979409    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:31.979880    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:32.477180    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:32.477207    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:32.477229    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:32.477240    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:32.480686    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:32.976225    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:32.976250    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:32.976261    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:32.976269    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:32.979533    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:33.475956    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:33.475980    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:33.475991    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:33.475997    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:33.479025    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:33.977111    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:33.977185    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:33.977198    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:33.977204    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:33.980426    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:33.980922    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:34.476455    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:34.476470    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:34.476477    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:34.476481    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:34.478517    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:34.976996    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:34.977037    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:34.977049    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:34.977084    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:34.980500    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:35.476045    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:35.476068    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:35.476079    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:35.476086    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:35.479058    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:35.975920    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:35.975944    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:35.975956    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:35.975962    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:35.979071    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:36.477845    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:36.477872    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:36.477885    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:36.477946    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:36.481401    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:36.481890    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:36.977033    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:36.977057    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:36.977069    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:36.977076    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:36.980476    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:37.477095    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:37.477120    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:37.477133    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:37.477141    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:37.480485    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:37.976303    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:37.976320    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:37.976343    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:37.976348    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:37.978551    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:38.476492    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:38.476517    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.476528    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.476536    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.480190    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:38.976091    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:38.976114    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.976124    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.976129    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.979741    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:38.980051    3758 node_ready.go:49] node "ha-286000-m02" has status "Ready":"True"
	I0816 10:03:38.980066    3758 node_ready.go:38] duration metric: took 16.004516347s for node "ha-286000-m02" to be "Ready" ...
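
node_ready.go above polls GET /api/v1/nodes/ha-286000-m02 roughly every 500ms until the Ready condition turns True (16s in this run), after kubeadm.go:483 swapped the stale VIP endpoint for the live apiserver at 192.169.0.5:8443. An equivalent wait written against client-go (a sketch, not minikube's code):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19461-1276/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same shape as the wait above: poll every 500ms, give up after 6m.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "ha-286000-m02", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient apiserver hiccup; keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println(`node "ha-286000-m02" is Ready`)
}
```
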
	I0816 10:03:38.980077    3758 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 10:03:38.980127    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:03:38.980135    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.980142    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.980152    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.982937    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:38.987546    3758 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-2kqjf" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.987591    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-2kqjf
	I0816 10:03:38.987597    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.987603    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.987607    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.989339    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.989748    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:38.989758    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.989764    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.989768    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.991324    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.991660    3758 pod_ready.go:93] pod "coredns-6f6b679f8f-2kqjf" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:38.991671    3758 pod_ready.go:82] duration metric: took 4.111381ms for pod "coredns-6f6b679f8f-2kqjf" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.991678    3758 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-rfbz7" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.991708    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-rfbz7
	I0816 10:03:38.991713    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.991718    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.991721    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.993245    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.993624    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:38.993631    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.993640    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.993644    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.995095    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.995419    3758 pod_ready.go:93] pod "coredns-6f6b679f8f-rfbz7" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:38.995427    3758 pod_ready.go:82] duration metric: took 3.744646ms for pod "coredns-6f6b679f8f-rfbz7" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.995433    3758 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.995467    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-286000
	I0816 10:03:38.995472    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.995478    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.995482    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.997007    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.997390    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:38.997397    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.997403    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.997406    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.998839    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.999151    3758 pod_ready.go:93] pod "etcd-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:38.999160    3758 pod_ready.go:82] duration metric: took 3.721879ms for pod "etcd-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.999165    3758 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.999202    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-286000-m02
	I0816 10:03:38.999207    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.999213    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.999217    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.001189    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:39.001547    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:39.001554    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.001559    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.001563    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.003004    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:39.003290    3758 pod_ready.go:93] pod "etcd-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:39.003298    3758 pod_ready.go:82] duration metric: took 4.127582ms for pod "etcd-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.003307    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.177834    3758 request.go:632] Waited for 174.443582ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000
	I0816 10:03:39.177948    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000
	I0816 10:03:39.177963    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.177974    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.177981    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.181371    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:39.376438    3758 request.go:632] Waited for 194.538822ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:39.376562    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:39.376574    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.376586    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.376596    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.379989    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:39.380425    3758 pod_ready.go:93] pod "kube-apiserver-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:39.380437    3758 pod_ready.go:82] duration metric: took 377.131323ms for pod "kube-apiserver-ha-286000" in "kube-system" namespace to be "Ready" ...
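
From here pod_ready.go walks every system-critical pod on both nodes, and request.go:632 starts reporting ~200ms waits: that is client-go's default client-side rate limiter (QPS 5, burst 10) pacing the bursty GETs, not server-side priority and fairness. A client-go sketch of the same readiness check over the label selectors listed above, with the limiter raised explicitly:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19461-1276/kubeconfig")
	if err != nil {
		panic(err)
	}
	// Raising QPS/Burst avoids the "Waited ... due to client-side
	// throttling" messages seen in the log above.
	cfg.QPS = 50
	cfg.Burst = 100
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for _, sel := range []string{"k8s-app=kube-dns", "component=etcd",
		"component=kube-apiserver", "component=kube-controller-manager",
		"k8s-app=kube-proxy", "component=kube-scheduler"} {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
			metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%-45s ready=%v\n", p.Name, podReady(&p))
		}
	}
}
```
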
	I0816 10:03:39.380446    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.576633    3758 request.go:632] Waited for 196.095353ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000-m02
	I0816 10:03:39.576687    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000-m02
	I0816 10:03:39.576696    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.576769    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.576793    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.580164    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:39.776642    3758 request.go:632] Waited for 195.66937ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:39.776725    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:39.776735    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.776746    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.776752    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.780273    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:39.780714    3758 pod_ready.go:93] pod "kube-apiserver-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:39.780723    3758 pod_ready.go:82] duration metric: took 400.274102ms for pod "kube-apiserver-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.780730    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.977598    3758 request.go:632] Waited for 196.714211ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000
	I0816 10:03:39.977650    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000
	I0816 10:03:39.977659    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.977670    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.977679    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.981079    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.177460    3758 request.go:632] Waited for 195.94045ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:40.177528    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:40.177589    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:40.177603    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:40.177609    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:40.180971    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.181992    3758 pod_ready.go:93] pod "kube-controller-manager-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:40.182036    3758 pod_ready.go:82] duration metric: took 401.277819ms for pod "kube-controller-manager-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:40.182047    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:40.376258    3758 request.go:632] Waited for 194.111468ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000-m02
	I0816 10:03:40.376327    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000-m02
	I0816 10:03:40.376336    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:40.376347    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:40.376355    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:40.379476    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.576506    3758 request.go:632] Waited for 196.409077ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:40.576552    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:40.576560    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:40.576571    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:40.576580    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:40.579964    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.580641    3758 pod_ready.go:93] pod "kube-controller-manager-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:40.580654    3758 pod_ready.go:82] duration metric: took 398.607786ms for pod "kube-controller-manager-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:40.580663    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pt669" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:40.778194    3758 request.go:632] Waited for 197.469682ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pt669
	I0816 10:03:40.778324    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pt669
	I0816 10:03:40.778333    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:40.778345    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:40.778352    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:40.781925    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.976623    3758 request.go:632] Waited for 194.04689ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:40.976752    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:40.976761    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:40.976772    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:40.976780    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:40.980022    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.980473    3758 pod_ready.go:93] pod "kube-proxy-pt669" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:40.980485    3758 pod_ready.go:82] duration metric: took 399.822578ms for pod "kube-proxy-pt669" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:40.980494    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-w4nt2" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:41.177361    3758 request.go:632] Waited for 196.811255ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w4nt2
	I0816 10:03:41.177533    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w4nt2
	I0816 10:03:41.177545    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:41.177556    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:41.177563    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:41.181543    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:41.377692    3758 request.go:632] Waited for 195.583238ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:41.377791    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:41.377802    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:41.377812    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:41.377819    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:41.381134    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:41.381716    3758 pod_ready.go:93] pod "kube-proxy-w4nt2" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:41.381726    3758 pod_ready.go:82] duration metric: took 401.234529ms for pod "kube-proxy-w4nt2" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:41.381733    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:41.577020    3758 request.go:632] Waited for 195.246357ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000
	I0816 10:03:41.577073    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000
	I0816 10:03:41.577125    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:41.577138    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:41.577147    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:41.580051    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:41.777879    3758 request.go:632] Waited for 197.366145ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:41.777974    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:41.777983    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:41.777995    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:41.778003    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:41.781434    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:41.782012    3758 pod_ready.go:93] pod "kube-scheduler-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:41.782023    3758 pod_ready.go:82] duration metric: took 400.292624ms for pod "kube-scheduler-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:41.782051    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:41.976356    3758 request.go:632] Waited for 194.23341ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000-m02
	I0816 10:03:41.976463    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000-m02
	I0816 10:03:41.976473    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:41.976485    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:41.976494    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:41.980088    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:42.176952    3758 request.go:632] Waited for 196.161737ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:42.177009    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:42.177018    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.177026    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.177083    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.182888    3758 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 10:03:42.183179    3758 pod_ready.go:93] pod "kube-scheduler-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:42.183188    3758 pod_ready.go:82] duration metric: took 401.133714ms for pod "kube-scheduler-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:42.183203    3758 pod_ready.go:39] duration metric: took 3.203173842s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
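
The block above polls each control-plane pod until its Ready condition reports True, pacing its GETs to stay under the client-side rate limiter. A minimal sketch of that kind of readiness wait with client-go follows; it assumes a kubeconfig at the default path and borrows a pod name from the log — an illustration, not minikube's actual pod_ready.go:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // same 6m0s budget as the log
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-ha-286000-m02", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // crude pacing; the client's throttling adds the waits logged above
	}
	fmt.Println("timed out waiting for Ready")
}
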
	I0816 10:03:42.183221    3758 api_server.go:52] waiting for apiserver process to appear ...
	I0816 10:03:42.183274    3758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:03:42.195671    3758 api_server.go:72] duration metric: took 19.566910589s to wait for apiserver process to appear ...
	I0816 10:03:42.195682    3758 api_server.go:88] waiting for apiserver healthz status ...
	I0816 10:03:42.195698    3758 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0816 10:03:42.198837    3758 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0816 10:03:42.198870    3758 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0816 10:03:42.198874    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.198880    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.198884    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.199389    3758 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0816 10:03:42.199468    3758 api_server.go:141] control plane version: v1.31.0
	I0816 10:03:42.199478    3758 api_server.go:131] duration metric: took 3.791893ms to wait for apiserver health ...
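
The healthz gate above is a plain HTTPS GET that passes once the endpoint answers 200 with body "ok". A minimal sketch against the endpoint from the log; skipping TLS verification is a shortcut for illustration only (minikube trusts the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// assumption for brevity: skip cert checks instead of loading the cluster CA
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.169.0.5:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect 200: ok
}
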
	I0816 10:03:42.199486    3758 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 10:03:42.378048    3758 request.go:632] Waited for 178.500938ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:03:42.378156    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:03:42.378166    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.378178    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.378186    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.382776    3758 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 10:03:42.386068    3758 system_pods.go:59] 17 kube-system pods found
	I0816 10:03:42.386083    3758 system_pods.go:61] "coredns-6f6b679f8f-2kqjf" [be18cd09-3e3b-4749-bf29-7001a879f593] Running
	I0816 10:03:42.386087    3758 system_pods.go:61] "coredns-6f6b679f8f-rfbz7" [a593e27a-e38b-46c7-a603-44963c31c095] Running
	I0816 10:03:42.386090    3758 system_pods.go:61] "etcd-ha-286000" [c45bc623-befe-4e88-8da1-bb4f7d7be5b2] Running
	I0816 10:03:42.386093    3758 system_pods.go:61] "etcd-ha-286000-m02" [e14fbe0c-4527-4cbb-8c13-01c0b32a8d15] Running
	I0816 10:03:42.386096    3758 system_pods.go:61] "kindnet-t9kjf" [20c57f28-5e86-44a4-8a2a-07e1e240abf2] Running
	I0816 10:03:42.386099    3758 system_pods.go:61] "kindnet-whqxb" [1cc5291b-52ea-44b5-b87c-607d46b5281a] Running
	I0816 10:03:42.386102    3758 system_pods.go:61] "kube-apiserver-ha-286000" [c0d298a9-931b-4084-9e5f-01a88de27456] Running
	I0816 10:03:42.386104    3758 system_pods.go:61] "kube-apiserver-ha-286000-m02" [3666c131-2b88-483a-ab8c-b18cb8db7d6f] Running
	I0816 10:03:42.386120    3758 system_pods.go:61] "kube-controller-manager-ha-286000" [5a4f4c5d-d89a-4e5f-b022-479ca31f64cd] Running
	I0816 10:03:42.386127    3758 system_pods.go:61] "kube-controller-manager-ha-286000-m02" [a347c3d4-fa47-49ea-93a0-83087017a2bd] Running
	I0816 10:03:42.386130    3758 system_pods.go:61] "kube-proxy-pt669" [798c956f-8f08-465a-b0d9-90b65f703f80] Running
	I0816 10:03:42.386139    3758 system_pods.go:61] "kube-proxy-w4nt2" [79ce2248-f8fd-4a3b-aeb7-f81d7ad64564] Running
	I0816 10:03:42.386143    3758 system_pods.go:61] "kube-scheduler-ha-286000" [b4af80e0-cbb0-4257-a212-3585faff0875] Running
	I0816 10:03:42.386145    3758 system_pods.go:61] "kube-scheduler-ha-286000-m02" [89920e93-c959-47ec-8a2c-f2b63e8c3d3a] Running
	I0816 10:03:42.386153    3758 system_pods.go:61] "kube-vip-ha-286000" [b0f85be2-2afb-44b0-a701-161b24529349] Running
	I0816 10:03:42.386156    3758 system_pods.go:61] "kube-vip-ha-286000-m02" [4982dc53-946d-4f75-8e9e-452ef39ff2fa] Running
	I0816 10:03:42.386159    3758 system_pods.go:61] "storage-provisioner" [4805d53b-2db3-4092-a3f2-d4a854e93adc] Running
	I0816 10:03:42.386163    3758 system_pods.go:74] duration metric: took 186.675963ms to wait for pod list to return data ...
	I0816 10:03:42.386170    3758 default_sa.go:34] waiting for default service account to be created ...
	I0816 10:03:42.577030    3758 request.go:632] Waited for 190.795237ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0816 10:03:42.577183    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0816 10:03:42.577198    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.577209    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.577215    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.581020    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:42.581204    3758 default_sa.go:45] found service account: "default"
	I0816 10:03:42.581216    3758 default_sa.go:55] duration metric: took 195.045213ms for default service account to be created ...
	I0816 10:03:42.581224    3758 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 10:03:42.777160    3758 request.go:632] Waited for 195.848894ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:03:42.777252    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:03:42.777268    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.777285    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.777303    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.781637    3758 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 10:03:42.784962    3758 system_pods.go:86] 17 kube-system pods found
	I0816 10:03:42.784974    3758 system_pods.go:89] "coredns-6f6b679f8f-2kqjf" [be18cd09-3e3b-4749-bf29-7001a879f593] Running
	I0816 10:03:42.784978    3758 system_pods.go:89] "coredns-6f6b679f8f-rfbz7" [a593e27a-e38b-46c7-a603-44963c31c095] Running
	I0816 10:03:42.784981    3758 system_pods.go:89] "etcd-ha-286000" [c45bc623-befe-4e88-8da1-bb4f7d7be5b2] Running
	I0816 10:03:42.784984    3758 system_pods.go:89] "etcd-ha-286000-m02" [e14fbe0c-4527-4cbb-8c13-01c0b32a8d15] Running
	I0816 10:03:42.784987    3758 system_pods.go:89] "kindnet-t9kjf" [20c57f28-5e86-44a4-8a2a-07e1e240abf2] Running
	I0816 10:03:42.784990    3758 system_pods.go:89] "kindnet-whqxb" [1cc5291b-52ea-44b5-b87c-607d46b5281a] Running
	I0816 10:03:42.784992    3758 system_pods.go:89] "kube-apiserver-ha-286000" [c0d298a9-931b-4084-9e5f-01a88de27456] Running
	I0816 10:03:42.784995    3758 system_pods.go:89] "kube-apiserver-ha-286000-m02" [3666c131-2b88-483a-ab8c-b18cb8db7d6f] Running
	I0816 10:03:42.784997    3758 system_pods.go:89] "kube-controller-manager-ha-286000" [5a4f4c5d-d89a-4e5f-b022-479ca31f64cd] Running
	I0816 10:03:42.785001    3758 system_pods.go:89] "kube-controller-manager-ha-286000-m02" [a347c3d4-fa47-49ea-93a0-83087017a2bd] Running
	I0816 10:03:42.785003    3758 system_pods.go:89] "kube-proxy-pt669" [798c956f-8f08-465a-b0d9-90b65f703f80] Running
	I0816 10:03:42.785006    3758 system_pods.go:89] "kube-proxy-w4nt2" [79ce2248-f8fd-4a3b-aeb7-f81d7ad64564] Running
	I0816 10:03:42.785009    3758 system_pods.go:89] "kube-scheduler-ha-286000" [b4af80e0-cbb0-4257-a212-3585faff0875] Running
	I0816 10:03:42.785011    3758 system_pods.go:89] "kube-scheduler-ha-286000-m02" [89920e93-c959-47ec-8a2c-f2b63e8c3d3a] Running
	I0816 10:03:42.785015    3758 system_pods.go:89] "kube-vip-ha-286000" [b0f85be2-2afb-44b0-a701-161b24529349] Running
	I0816 10:03:42.785017    3758 system_pods.go:89] "kube-vip-ha-286000-m02" [4982dc53-946d-4f75-8e9e-452ef39ff2fa] Running
	I0816 10:03:42.785023    3758 system_pods.go:89] "storage-provisioner" [4805d53b-2db3-4092-a3f2-d4a854e93adc] Running
	I0816 10:03:42.785028    3758 system_pods.go:126] duration metric: took 203.80313ms to wait for k8s-apps to be running ...
	I0816 10:03:42.785034    3758 system_svc.go:44] waiting for kubelet service to be running ...
	I0816 10:03:42.785088    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:03:42.796381    3758 system_svc.go:56] duration metric: took 11.343439ms for WaitForService to wait for kubelet
	I0816 10:03:42.796397    3758 kubeadm.go:582] duration metric: took 20.16764951s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:03:42.796409    3758 node_conditions.go:102] verifying NodePressure condition ...
	I0816 10:03:42.977594    3758 request.go:632] Waited for 181.143005ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0816 10:03:42.977695    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0816 10:03:42.977706    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.977719    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.977724    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.981373    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:42.981988    3758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 10:03:42.982013    3758 node_conditions.go:123] node cpu capacity is 2
	I0816 10:03:42.982039    3758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 10:03:42.982043    3758 node_conditions.go:123] node cpu capacity is 2
	I0816 10:03:42.982046    3758 node_conditions.go:105] duration metric: took 185.637747ms to run NodePressure ...
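
The NodePressure verification above reads each node's reported capacity (17734596Ki of ephemeral storage and 2 CPUs on both nodes). A minimal client-go sketch of the same listing, again assuming a default kubeconfig:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}
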
	I0816 10:03:42.982055    3758 start.go:241] waiting for startup goroutines ...
	I0816 10:03:42.982079    3758 start.go:255] writing updated cluster config ...
	I0816 10:03:43.003740    3758 out.go:201] 
	I0816 10:03:43.024885    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:03:43.024976    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:03:43.047776    3758 out.go:177] * Starting "ha-286000-m03" control-plane node in "ha-286000" cluster
	I0816 10:03:43.137621    3758 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:03:43.137655    3758 cache.go:56] Caching tarball of preloaded images
	I0816 10:03:43.137861    3758 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 10:03:43.137884    3758 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:03:43.138022    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:03:43.159475    3758 start.go:360] acquireMachinesLock for ha-286000-m03: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:03:43.159592    3758 start.go:364] duration metric: took 91.442µs to acquireMachinesLock for "ha-286000-m03"
	I0816 10:03:43.159625    3758 start.go:93] Provisioning new machine with config: &{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:03:43.159736    3758 start.go:125] createHost starting for "m03" (driver="hyperkit")
	I0816 10:03:43.180569    3758 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 10:03:43.180719    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:03:43.180762    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:03:43.191087    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51098
	I0816 10:03:43.191558    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:03:43.191889    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:03:43.191899    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:03:43.192106    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:03:43.192302    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:03:43.192394    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:43.192506    3758 start.go:159] libmachine.API.Create for "ha-286000" (driver="hyperkit")
	I0816 10:03:43.192526    3758 client.go:168] LocalClient.Create starting
	I0816 10:03:43.192559    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem
	I0816 10:03:43.192606    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:03:43.192620    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:03:43.192675    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem
	I0816 10:03:43.192704    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:03:43.192714    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:03:43.192726    3758 main.go:141] libmachine: Running pre-create checks...
	I0816 10:03:43.192731    3758 main.go:141] libmachine: (ha-286000-m03) Calling .PreCreateCheck
	I0816 10:03:43.192813    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:43.192833    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetConfigRaw
	I0816 10:03:43.202038    3758 main.go:141] libmachine: Creating machine...
	I0816 10:03:43.202062    3758 main.go:141] libmachine: (ha-286000-m03) Calling .Create
	I0816 10:03:43.202302    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:43.202574    3758 main.go:141] libmachine: (ha-286000-m03) DBG | I0816 10:03:43.202284    3848 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:03:43.202705    3758 main.go:141] libmachine: (ha-286000-m03) Downloading /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1276/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 10:03:43.529965    3758 main.go:141] libmachine: (ha-286000-m03) DBG | I0816 10:03:43.529902    3848 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa...
	I0816 10:03:43.631057    3758 main.go:141] libmachine: (ha-286000-m03) DBG | I0816 10:03:43.630965    3848 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/ha-286000-m03.rawdisk...
	I0816 10:03:43.631074    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Writing magic tar header
	I0816 10:03:43.631083    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Writing SSH key tar header
	I0816 10:03:43.631711    3758 main.go:141] libmachine: (ha-286000-m03) DBG | I0816 10:03:43.631681    3848 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03 ...
	I0816 10:03:44.155270    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:44.155286    3758 main.go:141] libmachine: (ha-286000-m03) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/hyperkit.pid
	I0816 10:03:44.155318    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Using UUID 408efb18-ff91-428f-b414-6eb0d982a779
	I0816 10:03:44.181981    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Generated MAC 8a:e:de:5b:b5:8b
	I0816 10:03:44.182010    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000
	I0816 10:03:44.182059    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"408efb18-ff91-428f-b414-6eb0d982a779", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:03:44.182101    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"408efb18-ff91-428f-b414-6eb0d982a779", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:03:44.182228    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "408efb18-ff91-428f-b414-6eb0d982a779", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/ha-286000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"}
	I0816 10:03:44.182306    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 408efb18-ff91-428f-b414-6eb0d982a779 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/ha-286000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"
	I0816 10:03:44.182332    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 10:03:44.185479    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: Pid is 3849
	I0816 10:03:44.185900    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 0
	I0816 10:03:44.185912    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:44.186025    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:44.186994    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:44.187062    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:03:44.187079    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:03:44.187114    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:03:44.187142    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:03:44.187168    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:03:44.187185    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:03:44.193352    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 10:03:44.201781    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 10:03:44.202595    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:03:44.202610    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:03:44.202637    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:03:44.202653    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:03:44.588441    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 10:03:44.588458    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 10:03:44.703100    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:03:44.703117    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:03:44.703125    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:03:44.703135    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:03:44.703952    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 10:03:44.703963    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 10:03:46.188345    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 1
	I0816 10:03:46.188360    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:46.188480    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:46.189255    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:46.189306    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:03:46.189321    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:03:46.189335    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:03:46.189352    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:03:46.189359    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:03:46.189385    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:03:48.190823    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 2
	I0816 10:03:48.190838    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:48.190916    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:48.191692    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:48.191747    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:03:48.191757    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:03:48.191779    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:03:48.191787    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:03:48.191803    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:03:48.191815    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:03:50.191897    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 3
	I0816 10:03:50.191913    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:50.191977    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:50.192744    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:50.192793    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:03:50.192802    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:03:50.192811    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:03:50.192820    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:03:50.192836    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:03:50.192850    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:03:50.310705    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:50 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 10:03:50.310770    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:50 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 10:03:50.310783    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:50 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 10:03:50.334054    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:50 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 10:03:52.193443    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 4
	I0816 10:03:52.193459    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:52.193535    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:52.194319    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:52.194367    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:03:52.194379    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:03:52.194390    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:03:52.194399    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:03:52.194408    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:03:52.194419    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:03:54.195434    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 5
	I0816 10:03:54.195456    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:54.195540    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:54.196406    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:54.196510    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 6 entries in /var/db/dhcpd_leases!
	I0816 10:03:54.196530    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66c0d7f9}
	I0816 10:03:54.196551    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found match: 8a:e:de:5b:b5:8b
	I0816 10:03:54.196553    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetConfigRaw
	I0816 10:03:54.196565    3758 main.go:141] libmachine: (ha-286000-m03) DBG | IP: 192.169.0.7
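
The "Attempt N" loop above re-reads /var/db/dhcpd_leases every couple of seconds until the VM's generated MAC (8a:e:de:5b:b5:8b) appears with a lease. A minimal sketch of that scan, assuming the name=value entry format the log echoes for each lease:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// leaseIP scans a macOS bootpd lease file for the entry bound to mac and
// returns its IP address. Entries are brace-delimited blocks of name=value
// lines; ip_address precedes hw_address within a block.
func leaseIP(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=") // remember this entry's IP
		case strings.HasPrefix(line, "hw_address="):
			// hw_address=1,8a:e:de:5b:b5:8b -> compare the MAC after the type prefix
			if strings.HasSuffix(line, ","+mac) || strings.HasSuffix(line, "="+mac) {
				return ip, nil
			}
		}
	}
	return "", fmt.Errorf("no lease for %s", mac)
}

func main() {
	ip, err := leaseIP("/var/db/dhcpd_leases", "8a:e:de:5b:b5:8b")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("IP:", ip) // e.g. 192.169.0.7, as found on attempt 5 above
}
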
	I0816 10:03:54.197236    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:54.197351    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:54.197441    3758 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 10:03:54.197454    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetState
	I0816 10:03:54.197541    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:54.197611    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:54.198439    3758 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 10:03:54.198446    3758 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 10:03:54.198450    3758 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 10:03:54.198454    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:54.198532    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:54.198612    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:54.198695    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:54.198783    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:54.198892    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:54.199063    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:54.199071    3758 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 10:03:55.266165    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
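
WaitForSSH above amounts to retrying "exit 0" over SSH until the guest answers. A minimal sketch using golang.org/x/crypto/ssh, with the address, user, and key path taken from the log; the retry interval and count are assumptions for illustration:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// trySSH dials the guest and runs a no-op command once.
func trySSH(addr string, cfg *ssh.ClientConfig) error {
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.Run("exit 0")
}

func main() {
	keyBytes, err := os.ReadFile("/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway local VM
		Timeout:         5 * time.Second,
	}
	for i := 0; i < 60; i++ {
		if err := trySSH("192.169.0.7:22", cfg); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}
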
	I0816 10:03:55.266179    3758 main.go:141] libmachine: Detecting the provisioner...
	I0816 10:03:55.266185    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.266318    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.266413    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.266507    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.266601    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.266731    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.266879    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.266886    3758 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 10:03:55.332778    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 10:03:55.332822    3758 main.go:141] libmachine: found compatible host: buildroot
	I0816 10:03:55.332828    3758 main.go:141] libmachine: Provisioning with buildroot...
	I0816 10:03:55.332834    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:03:55.332971    3758 buildroot.go:166] provisioning hostname "ha-286000-m03"
	I0816 10:03:55.332982    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:03:55.333079    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.333186    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.333277    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.333375    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.333463    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.333593    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.333737    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.333746    3758 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-286000-m03 && echo "ha-286000-m03" | sudo tee /etc/hostname
	I0816 10:03:55.411701    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-286000-m03
	
	I0816 10:03:55.411715    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.411851    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.411965    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.412057    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.412140    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.412274    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.412418    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.412429    3758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-286000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-286000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-286000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 10:03:55.485102    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 10:03:55.485118    3758 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19461-1276/.minikube CaCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19461-1276/.minikube}
	I0816 10:03:55.485127    3758 buildroot.go:174] setting up certificates
	I0816 10:03:55.485134    3758 provision.go:84] configureAuth start
	I0816 10:03:55.485141    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:03:55.485284    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:03:55.485375    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.485445    3758 provision.go:143] copyHostCerts
	I0816 10:03:55.485474    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:03:55.485527    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem, removing ...
	I0816 10:03:55.485533    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:03:55.485654    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem (1123 bytes)
	I0816 10:03:55.485864    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:03:55.485895    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem, removing ...
	I0816 10:03:55.485900    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:03:55.486003    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem (1679 bytes)
	I0816 10:03:55.486172    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:03:55.486206    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem, removing ...
	I0816 10:03:55.486215    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:03:55.486286    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem (1078 bytes)
	I0816 10:03:55.486435    3758 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem org=jenkins.ha-286000-m03 san=[127.0.0.1 192.169.0.7 ha-286000-m03 localhost minikube]
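
The server certificate above is issued for the SANs listed (127.0.0.1, 192.169.0.7, ha-286000-m03, localhost, minikube). A minimal crypto/x509 sketch that produces a certificate with the same SANs; it self-signs to stay self-contained, whereas the real flow signs with the CA key under .minikube/certs:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-286000-m03"}}, // org from the log
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-286000-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.7")},
	}
	// Self-signed for the sketch: the template doubles as parent. Signing with
	// a CA would pass the CA certificate and its private key here instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	out, err := os.Create("server.pem")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
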
	I0816 10:03:55.572803    3758 provision.go:177] copyRemoteCerts
	I0816 10:03:55.572855    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 10:03:55.572869    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.573015    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.573114    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.573208    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.573304    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:03:55.612104    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 10:03:55.612186    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 10:03:55.632484    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 10:03:55.632548    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 10:03:55.652677    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 10:03:55.652752    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 10:03:55.672555    3758 provision.go:87] duration metric: took 187.4165ms to configureAuth
	I0816 10:03:55.672568    3758 buildroot.go:189] setting minikube options for container-runtime
	I0816 10:03:55.672735    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:03:55.672748    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:55.672889    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.672982    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.673071    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.673157    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.673245    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.673371    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.673496    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.673504    3758 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 10:03:55.738053    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 10:03:55.738064    3758 buildroot.go:70] root file system type: tmpfs
	I0816 10:03:55.738156    3758 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 10:03:55.738167    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.738306    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.738397    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.738489    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.738573    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.738694    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.738841    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.738890    3758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 10:03:55.813774    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
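Note that the rendered unit carries two Environment=NO_PROXY lines, one accumulated per existing node. Per systemd.exec(5), when the same variable is assigned twice the later assignment wins, so the effective value is NO_PROXY=192.169.0.5,192.169.0.6 and both existing peers bypass any proxy. Once the unit is installed, this can be confirmed on the guest:

    # Show the environment systemd will hand to dockerd.
    systemctl show docker --property=Environment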
	
	I0816 10:03:55.813790    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.813924    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.814019    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.814103    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.814193    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.814320    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.814470    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.814484    3758 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 10:03:57.356529    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
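The diff-or-install one-liner that produced this output installs the new unit only when it differs from what is already on disk. On a freshly provisioned node /lib/systemd/system/docker.service does not exist yet, so the "diff: can't stat" message is expected: the non-zero exit simply routes execution into the install branch. The idiom, with placeholder paths, reduces to:

    # Install-if-changed sketch (OLD/NEW stand in for the unit paths above).
    if ! sudo diff -u "$OLD" "$NEW"; then   # non-zero: files differ, or $OLD is missing
        sudo mv "$NEW" "$OLD"
        sudo systemctl daemon-reload && sudo systemctl enable docker && sudo systemctl restart docker
    fi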
	
	I0816 10:03:57.356544    3758 main.go:141] libmachine: Checking connection to Docker...
	I0816 10:03:57.356549    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetURL
	I0816 10:03:57.356692    3758 main.go:141] libmachine: Docker is up and running!
	I0816 10:03:57.356700    3758 main.go:141] libmachine: Reticulating splines...
	I0816 10:03:57.356705    3758 client.go:171] duration metric: took 14.164443579s to LocalClient.Create
	I0816 10:03:57.356717    3758 start.go:167] duration metric: took 14.164482312s to libmachine.API.Create "ha-286000"
	I0816 10:03:57.356727    3758 start.go:293] postStartSetup for "ha-286000-m03" (driver="hyperkit")
	I0816 10:03:57.356734    3758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 10:03:57.356744    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:57.356919    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 10:03:57.356935    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:57.357030    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:57.357130    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:57.357273    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:57.357390    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:03:57.398231    3758 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 10:03:57.402472    3758 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 10:03:57.402483    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/addons for local assets ...
	I0816 10:03:57.402571    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/files for local assets ...
	I0816 10:03:57.402723    3758 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> 18312.pem in /etc/ssl/certs
	I0816 10:03:57.402729    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /etc/ssl/certs/18312.pem
	I0816 10:03:57.402895    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 10:03:57.412226    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:03:57.443242    3758 start.go:296] duration metric: took 86.507779ms for postStartSetup
	I0816 10:03:57.443268    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetConfigRaw
	I0816 10:03:57.443871    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:03:57.444031    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:03:57.444386    3758 start.go:128] duration metric: took 14.284913269s to createHost
	I0816 10:03:57.444401    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:57.444490    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:57.444568    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:57.444658    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:57.444732    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:57.444842    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:57.444963    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:57.444971    3758 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 10:03:57.509634    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723827837.586061636
	
	I0816 10:03:57.509648    3758 fix.go:216] guest clock: 1723827837.586061636
	I0816 10:03:57.509653    3758 fix.go:229] Guest: 2024-08-16 10:03:57.586061636 -0700 PDT Remote: 2024-08-16 10:03:57.444395 -0700 PDT m=+129.370490857 (delta=141.666636ms)
	I0816 10:03:57.509665    3758 fix.go:200] guest clock delta is within tolerance: 141.666636ms
	I0816 10:03:57.509669    3758 start.go:83] releasing machines lock for "ha-286000-m03", held for 14.350339851s
	I0816 10:03:57.509691    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:57.509832    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:03:57.542277    3758 out.go:177] * Found network options:
	I0816 10:03:57.562266    3758 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0816 10:03:57.583040    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	W0816 10:03:57.583073    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:03:57.583092    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:57.583964    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:57.584310    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:57.584469    3758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 10:03:57.584522    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	W0816 10:03:57.584570    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	W0816 10:03:57.584596    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:03:57.584708    3758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0816 10:03:57.584735    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:57.584813    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:57.584995    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:57.585023    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:57.585164    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:57.585195    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:57.585338    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:03:57.585406    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:57.585566    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	W0816 10:03:57.623481    3758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 10:03:57.623541    3758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 10:03:57.670278    3758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 10:03:57.670298    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:03:57.670408    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:03:57.686134    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 10:03:57.694681    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 10:03:57.703134    3758 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 10:03:57.703198    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 10:03:57.711736    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:03:57.720041    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 10:03:57.728421    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:03:57.737252    3758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 10:03:57.746356    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 10:03:57.755117    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 10:03:57.763962    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 10:03:57.772639    3758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 10:03:57.780684    3758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 10:03:57.788938    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:03:57.893932    3758 ssh_runner.go:195] Run: sudo systemctl restart containerd
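Rather than templating a fresh file, minikube rewrites the stock /etc/containerd/config.toml in place with the sed sequence above. Assuming the Buildroot default layout, the net effect on the CRI section is roughly the following (a reconstruction from the sed expressions, not a dump from this run):

    # /etc/containerd/config.toml (relevant fragment after the edits)
    [plugins."io.containerd.grpc.v1.cri"]
      enable_unprivileged_ports = true
      sandbox_image = "registry.k8s.io/pause:3.10"
      restrict_oom_score_adj = false
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = false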
	I0816 10:03:57.913150    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:03:57.913224    3758 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 10:03:57.932274    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:03:57.945000    3758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 10:03:57.965222    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:03:57.977460    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:03:57.988259    3758 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 10:03:58.006552    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:03:58.017261    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:03:58.032545    3758 ssh_runner.go:195] Run: which cri-dockerd
	I0816 10:03:58.035464    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 10:03:58.042742    3758 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 10:03:58.056747    3758 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 10:03:58.163471    3758 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 10:03:58.281020    3758 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 10:03:58.281044    3758 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
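The 130-byte daemon.json written here is not echoed into the log. Based on minikube's Docker provisioner it typically pins the cgroup driver announced on the previous line plus logging defaults; a hypothetical reconstruction (not captured from this run) looks like:

    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"],
      "log-driver": "json-file",
      "log-opts": { "max-size": "100m" },
      "storage-driver": "overlay2"
    }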
	I0816 10:03:58.294992    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:03:58.399122    3758 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:04:59.415916    3758 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.017935847s)
	I0816 10:04:59.415981    3758 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0816 10:04:59.453653    3758 out.go:201] 
	W0816 10:04:59.474040    3758 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 16 17:03:56 ha-286000-m03 systemd[1]: Starting Docker Application Container Engine...
	Aug 16 17:03:56 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:56.200976866Z" level=info msg="Starting up"
	Aug 16 17:03:56 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:56.201455258Z" level=info msg="containerd not running, starting managed containerd"
	Aug 16 17:03:56 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:56.201908035Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=519
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.218394236Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233514110Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233584256Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233654513Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233690141Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233768300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233806284Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233955562Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233996222Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.234026670Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.234054929Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.234137090Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.234382642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.235934793Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.235985918Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.236117022Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.236159609Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.236256962Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.236327154Z" level=info msg="metadata content store policy set" policy=shared
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238422470Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238509436Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238555940Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238594343Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238633954Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238734892Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238944917Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239083528Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239123172Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239153894Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239184820Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239214617Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239248728Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239285992Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239333696Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239368624Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239411950Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239451554Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239490333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239523121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239554681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239589296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239621390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239653930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239683266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239712579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239742056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239772828Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239801901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239830636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239859570Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239893189Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239929555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239960582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239991787Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240076099Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240121809Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240153088Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240182438Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240210918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240240067Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240268586Z" level=info msg="NRI interface is disabled by configuration."
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240520719Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240609661Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240710485Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240795617Z" level=info msg="containerd successfully booted in 0.023088s"
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.221812848Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.228854621Z" level=info msg="Loading containers: start."
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.312023793Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.390459780Z" level=info msg="Loading containers: done."
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.404746652Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.404864935Z" level=info msg="Daemon has completed initialization"
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.430646618Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.430810646Z" level=info msg="API listen on [::]:2376"
	Aug 16 17:03:57 ha-286000-m03 systemd[1]: Started Docker Application Container Engine.
	Aug 16 17:03:58 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:58.488220464Z" level=info msg="Processing signal 'terminated'"
	Aug 16 17:03:58 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:58.489144852Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 16 17:03:58 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:58.489473398Z" level=info msg="Daemon shutdown complete"
	Aug 16 17:03:58 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:58.489515121Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 16 17:03:58 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:58.489527809Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 16 17:03:58 ha-286000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Aug 16 17:03:59 ha-286000-m03 systemd[1]: docker.service: Deactivated successfully.
	Aug 16 17:03:59 ha-286000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Aug 16 17:03:59 ha-286000-m03 systemd[1]: Starting Docker Application Container Engine...
	Aug 16 17:03:59 ha-286000-m03 dockerd[913]: time="2024-08-16T17:03:59.520198872Z" level=info msg="Starting up"
	Aug 16 17:04:59 ha-286000-m03 dockerd[913]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 16 17:04:59 ha-286000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 16 17:04:59 ha-286000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 16 17:04:59 ha-286000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
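The journalctl excerpt pins the failure down: the first dockerd (pid 513) starts its own managed containerd on /var/run/docker/containerd/containerd.sock and comes up cleanly, but after the configuration pass restarts the service, the second dockerd (pid 913) instead dials the system socket /run/containerd/containerd.sock and gives up when the 60-second dial deadline expires, so docker.service fails and the run exits with RUNTIME_ENABLE. One plausible cause, given the `systemctl stop -f containerd` issued earlier, is a stale socket file left behind by the stopped system containerd, which dockerd then prefers over spawning a managed one; that is a hypothesis, not something this log proves. The usual next checks on the guest would be:

    # Is the system containerd actually running, or merely leaving a stale socket?
    sudo systemctl status containerd
    ls -l /run/containerd/containerd.sock
    # containerd's own log for the same window
    sudo journalctl --no-pager -u containerd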
	W0816 10:04:59.474111    3758 out.go:270] * 
	W0816 10:04:59.474938    3758 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:04:59.558192    3758 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-amd64 start -p ha-286000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit " : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-286000 -n ha-286000
helpers_test.go:244: <<< TestMultiControlPlane/serial/StartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-286000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-286000 logs -n 25: (2.199251081s)
helpers_test.go:252: TestMultiControlPlane/serial/StartCluster logs: 
-- stdout --
	
	==> Audit <==
	|----------------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| mount          | -p functional-373000                                                                                                 | functional-373000 | jenkins | v1.33.1 | 16 Aug 24 10:01 PDT |                     |
	|                | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port4084863197/001:/mount-9p |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh            | functional-373000 ssh findmnt                                                                                        | functional-373000 | jenkins | v1.33.1 | 16 Aug 24 10:01 PDT | 16 Aug 24 10:01 PDT |
	|                | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh            | functional-373000 ssh -- ls                                                                                          | functional-373000 | jenkins | v1.33.1 | 16 Aug 24 10:01 PDT | 16 Aug 24 10:01 PDT |
	|                | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh            | functional-373000 ssh sudo                                                                                           | functional-373000 | jenkins | v1.33.1 | 16 Aug 24 10:01 PDT |                     |
	|                | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount          | -p functional-373000                                                                                                 | functional-373000 | jenkins | v1.33.1 | 16 Aug 24 10:01 PDT |                     |
	|                | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3825892010/001:/mount1   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount          | -p functional-373000                                                                                                 | functional-373000 | jenkins | v1.33.1 | 16 Aug 24 10:01 PDT |                     |
	|                | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3825892010/001:/mount2   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount          | -p functional-373000                                                                                                 | functional-373000 | jenkins | v1.33.1 | 16 Aug 24 10:01 PDT |                     |
	|                | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3825892010/001:/mount3   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh            | functional-373000 ssh findmnt                                                                                        | functional-373000 | jenkins | v1.33.1 | 16 Aug 24 10:01 PDT |                     |
	|                | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh            | functional-373000 ssh findmnt                                                                                        | functional-373000 | jenkins | v1.33.1 | 16 Aug 24 10:01 PDT |                     |
	|                | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh            | functional-373000 ssh findmnt                                                                                        | functional-373000 | jenkins | v1.33.1 | 16 Aug 24 10:01 PDT | 16 Aug 24 10:01 PDT |
	|                | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh            | functional-373000 ssh findmnt                                                                                        | functional-373000 | jenkins | v1.33.1 | 16 Aug 24 10:01 PDT | 16 Aug 24 10:01 PDT |
	|                | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh            | functional-373000 ssh findmnt                                                                                        | functional-373000 | jenkins | v1.33.1 | 16 Aug 24 10:01 PDT | 16 Aug 24 10:01 PDT |
	|                | -T /mount3                                                                                                           |                   |         |         |                     |                     |
	| mount          | -p functional-373000                                                                                                 | functional-373000 | jenkins | v1.33.1 | 16 Aug 24 10:01 PDT |                     |
	|                | --kill=true                                                                                                          |                   |         |         |                     |                     |
	| update-context | functional-373000                                                                                                    | functional-373000 | jenkins | v1.33.1 | 16 Aug 24 10:01 PDT | 16 Aug 24 10:01 PDT |
	|                | update-context                                                                                                       |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                                                               |                   |         |         |                     |                     |
	| update-context | functional-373000                                                                                                    | functional-373000 | jenkins | v1.33.1 | 16 Aug 24 10:01 PDT | 16 Aug 24 10:01 PDT |
	|                | update-context                                                                                                       |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                                                               |                   |         |         |                     |                     |
	| update-context | functional-373000                                                                                                    | functional-373000 | jenkins | v1.33.1 | 16 Aug 24 10:01 PDT | 16 Aug 24 10:01 PDT |
	|                | update-context                                                                                                       |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                                                               |                   |         |         |                     |                     |
	| image          | functional-373000                                                                                                    | functional-373000 | jenkins | v1.33.1 | 16 Aug 24 10:01 PDT | 16 Aug 24 10:01 PDT |
	|                | image ls --format short                                                                                              |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                                                    |                   |         |         |                     |                     |
	| image          | functional-373000                                                                                                    | functional-373000 | jenkins | v1.33.1 | 16 Aug 24 10:01 PDT | 16 Aug 24 10:01 PDT |
	|                | image ls --format yaml                                                                                               |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                                                    |                   |         |         |                     |                     |
	| ssh            | functional-373000 ssh pgrep                                                                                          | functional-373000 | jenkins | v1.33.1 | 16 Aug 24 10:01 PDT |                     |
	|                | buildkitd                                                                                                            |                   |         |         |                     |                     |
	| image          | functional-373000 image build -t                                                                                     | functional-373000 | jenkins | v1.33.1 | 16 Aug 24 10:01 PDT | 16 Aug 24 10:01 PDT |
	|                | localhost/my-image:functional-373000                                                                                 |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                                                                     |                   |         |         |                     |                     |
	| image          | functional-373000 image ls                                                                                           | functional-373000 | jenkins | v1.33.1 | 16 Aug 24 10:01 PDT | 16 Aug 24 10:01 PDT |
	| image          | functional-373000                                                                                                    | functional-373000 | jenkins | v1.33.1 | 16 Aug 24 10:01 PDT | 16 Aug 24 10:01 PDT |
	|                | image ls --format json                                                                                               |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                                                    |                   |         |         |                     |                     |
	| image          | functional-373000                                                                                                    | functional-373000 | jenkins | v1.33.1 | 16 Aug 24 10:01 PDT | 16 Aug 24 10:01 PDT |
	|                | image ls --format table                                                                                              |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                                                    |                   |         |         |                     |                     |
	| delete         | -p functional-373000                                                                                                 | functional-373000 | jenkins | v1.33.1 | 16 Aug 24 10:01 PDT | 16 Aug 24 10:01 PDT |
	| start          | -p ha-286000 --wait=true                                                                                             | ha-286000         | jenkins | v1.33.1 | 16 Aug 24 10:01 PDT |                     |
	|                | --memory=2200 --ha                                                                                                   |                   |         |         |                     |                     |
	|                | -v=7 --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	|                | --driver=hyperkit                                                                                                    |                   |         |         |                     |                     |
	|----------------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 10:01:48
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 10:01:48.113296    3758 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:01:48.113462    3758 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:01:48.113467    3758 out.go:358] Setting ErrFile to fd 2...
	I0816 10:01:48.113470    3758 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:01:48.113648    3758 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1276/.minikube/bin
	I0816 10:01:48.115192    3758 out.go:352] Setting JSON to false
	I0816 10:01:48.140204    3758 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1878,"bootTime":1723825830,"procs":433,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0816 10:01:48.140316    3758 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:01:48.194498    3758 out.go:177] * [ha-286000] minikube v1.33.1 on Darwin 14.6.1
	I0816 10:01:48.236650    3758 notify.go:220] Checking for updates...
	I0816 10:01:48.262360    3758 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:01:48.320415    3758 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:01:48.381368    3758 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0816 10:01:48.452505    3758 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:01:48.474398    3758 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:01:48.495459    3758 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:01:48.520120    3758 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:01:48.550379    3758 out.go:177] * Using the hyperkit driver based on user configuration
	I0816 10:01:48.592331    3758 start.go:297] selected driver: hyperkit
	I0816 10:01:48.592358    3758 start.go:901] validating driver "hyperkit" against <nil>
	I0816 10:01:48.592382    3758 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:01:48.596757    3758 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:01:48.596907    3758 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19461-1276/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0816 10:01:48.605545    3758 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0816 10:01:48.609748    3758 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:01:48.609772    3758 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0816 10:01:48.609805    3758 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 10:01:48.610046    3758 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:01:48.610094    3758 cni.go:84] Creating CNI manager for ""
	I0816 10:01:48.610103    3758 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0816 10:01:48.610108    3758 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0816 10:01:48.610236    3758 start.go:340] cluster config:
	{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:01:48.610323    3758 iso.go:125] acquiring lock: {Name:mkb430b43b5e7a8a683454482519837ed996c78d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:01:48.632334    3758 out.go:177] * Starting "ha-286000" primary control-plane node in "ha-286000" cluster
	I0816 10:01:48.653456    3758 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:01:48.653537    3758 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0816 10:01:48.653563    3758 cache.go:56] Caching tarball of preloaded images
	I0816 10:01:48.653777    3758 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 10:01:48.653813    3758 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:01:48.654332    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:01:48.654370    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json: {Name:mk79fdae2e45cdfc987caaf16c5f211150e90dc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:01:48.654995    3758 start.go:360] acquireMachinesLock for ha-286000: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:01:48.655106    3758 start.go:364] duration metric: took 87.458µs to acquireMachinesLock for "ha-286000"
	I0816 10:01:48.655154    3758 start.go:93] Provisioning new machine with config: &{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:01:48.655246    3758 start.go:125] createHost starting for "" (driver="hyperkit")
	I0816 10:01:48.677450    3758 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 10:01:48.677701    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:01:48.677786    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:01:48.687593    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51004
	I0816 10:01:48.687935    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:01:48.688347    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:01:48.688357    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:01:48.688562    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:01:48.688668    3758 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:01:48.688765    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:01:48.688895    3758 start.go:159] libmachine.API.Create for "ha-286000" (driver="hyperkit")
	I0816 10:01:48.688916    3758 client.go:168] LocalClient.Create starting
	I0816 10:01:48.688948    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem
	I0816 10:01:48.688999    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:01:48.689012    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:01:48.689063    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem
	I0816 10:01:48.689100    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:01:48.689111    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:01:48.689128    3758 main.go:141] libmachine: Running pre-create checks...
	I0816 10:01:48.689135    3758 main.go:141] libmachine: (ha-286000) Calling .PreCreateCheck
	I0816 10:01:48.689224    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:48.689427    3758 main.go:141] libmachine: (ha-286000) Calling .GetConfigRaw
	I0816 10:01:48.698771    3758 main.go:141] libmachine: Creating machine...
	I0816 10:01:48.698794    3758 main.go:141] libmachine: (ha-286000) Calling .Create
	I0816 10:01:48.699018    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:48.699296    3758 main.go:141] libmachine: (ha-286000) DBG | I0816 10:01:48.699015    3766 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:01:48.699450    3758 main.go:141] libmachine: (ha-286000) Downloading /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1276/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 10:01:48.884950    3758 main.go:141] libmachine: (ha-286000) DBG | I0816 10:01:48.884833    3766 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa...
	I0816 10:01:48.986348    3758 main.go:141] libmachine: (ha-286000) DBG | I0816 10:01:48.986275    3766 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/ha-286000.rawdisk...
	I0816 10:01:48.986388    3758 main.go:141] libmachine: (ha-286000) DBG | Writing magic tar header
	I0816 10:01:48.986396    3758 main.go:141] libmachine: (ha-286000) DBG | Writing SSH key tar header
	I0816 10:01:48.987038    3758 main.go:141] libmachine: (ha-286000) DBG | I0816 10:01:48.987004    3766 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000 ...
	I0816 10:01:49.360700    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:49.360726    3758 main.go:141] libmachine: (ha-286000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/hyperkit.pid
	I0816 10:01:49.360741    3758 main.go:141] libmachine: (ha-286000) DBG | Using UUID ad96de67-e238-408c-89eb-d74e5b68d297
	I0816 10:01:49.471852    3758 main.go:141] libmachine: (ha-286000) DBG | Generated MAC 66:c8:48:4e:12:1b
	I0816 10:01:49.471873    3758 main.go:141] libmachine: (ha-286000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000
	I0816 10:01:49.471908    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ad96de67-e238-408c-89eb-d74e5b68d297", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000b2330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:01:49.471951    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ad96de67-e238-408c-89eb-d74e5b68d297", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000b2330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:01:49.471996    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "ad96de67-e238-408c-89eb-d74e5b68d297", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/ha-286000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"}
	I0816 10:01:49.472043    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U ad96de67-e238-408c-89eb-d74e5b68d297 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/ha-286000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"
	I0816 10:01:49.472063    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 10:01:49.475179    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: Pid is 3771
	I0816 10:01:49.475583    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 0
	I0816 10:01:49.475597    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:49.475674    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:49.476510    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:49.476583    3758 main.go:141] libmachine: (ha-286000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0816 10:01:49.476592    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:01:49.476602    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:01:49.476612    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:01:49.482826    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 10:01:49.534430    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 10:01:49.535040    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:01:49.535056    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:01:49.535064    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:01:49.535072    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:01:49.909510    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 10:01:49.909527    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 10:01:50.024439    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:01:50.024459    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:01:50.024479    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:01:50.024494    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:01:50.025363    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 10:01:50.025375    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 10:01:51.477573    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 1
	I0816 10:01:51.477587    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:51.477660    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:51.478481    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:51.478504    3758 main.go:141] libmachine: (ha-286000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0816 10:01:51.478511    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:01:51.478520    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:01:51.478531    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:01:53.479582    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 2
	I0816 10:01:53.479599    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:53.479760    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:53.480630    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:53.480678    3758 main.go:141] libmachine: (ha-286000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0816 10:01:53.480688    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:01:53.480697    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:01:53.480710    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:01:55.482614    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 3
	I0816 10:01:55.482630    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:55.482703    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:55.483616    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:55.483662    3758 main.go:141] libmachine: (ha-286000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0816 10:01:55.483672    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:01:55.483679    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:01:55.483687    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:01:55.581983    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:55 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 10:01:55.582063    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:55 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 10:01:55.582075    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:55 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 10:01:55.606414    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:55 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 10:01:57.484306    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 4
	I0816 10:01:57.484322    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:57.484414    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:57.485196    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:57.485261    3758 main.go:141] libmachine: (ha-286000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0816 10:01:57.485273    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:01:57.485286    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:01:57.485294    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:01:59.485978    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 5
	I0816 10:01:59.486008    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:59.486192    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:59.487610    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:59.487709    3758 main.go:141] libmachine: (ha-286000) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:01:59.487730    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:01:59.487784    3758 main.go:141] libmachine: (ha-286000) DBG | Found match: 66:c8:48:4e:12:1b
	I0816 10:01:59.487797    3758 main.go:141] libmachine: (ha-286000) DBG | IP: 192.169.0.5
	I0816 10:01:59.487831    3758 main.go:141] libmachine: (ha-286000) Calling .GetConfigRaw
	I0816 10:01:59.488860    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:01:59.489069    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:01:59.489276    3758 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 10:01:59.489289    3758 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:01:59.489463    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:59.489567    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:59.490615    3758 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 10:01:59.490628    3758 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 10:01:59.490644    3758 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 10:01:59.490652    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:01:59.490782    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:01:59.490923    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:01:59.491055    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:01:59.491190    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:01:59.491362    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:01:59.491595    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:01:59.491605    3758 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 10:01:59.511220    3758 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0816 10:02:02.573095    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 10:02:02.573108    3758 main.go:141] libmachine: Detecting the provisioner...
	I0816 10:02:02.573113    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:02.573255    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:02.573385    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.573489    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.573602    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:02.573753    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:02.573906    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:02.573914    3758 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 10:02:02.632510    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 10:02:02.632561    3758 main.go:141] libmachine: found compatible host: buildroot
	I0816 10:02:02.632568    3758 main.go:141] libmachine: Provisioning with buildroot...
	I0816 10:02:02.632573    3758 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:02:02.632721    3758 buildroot.go:166] provisioning hostname "ha-286000"
	I0816 10:02:02.632732    3758 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:02:02.632834    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:02.632956    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:02.633046    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.633122    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.633236    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:02.633363    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:02.633492    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:02.633500    3758 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-286000 && echo "ha-286000" | sudo tee /etc/hostname
	I0816 10:02:02.702336    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-286000
	
	I0816 10:02:02.702354    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:02.702482    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:02.702569    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.702670    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.702756    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:02.702893    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:02.703040    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:02.703052    3758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-286000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-286000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-286000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 10:02:02.766997    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 10:02:02.767016    3758 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19461-1276/.minikube CaCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19461-1276/.minikube}
	I0816 10:02:02.767026    3758 buildroot.go:174] setting up certificates
	I0816 10:02:02.767038    3758 provision.go:84] configureAuth start
	I0816 10:02:02.767044    3758 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:02:02.767175    3758 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:02:02.767270    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:02.767372    3758 provision.go:143] copyHostCerts
	I0816 10:02:02.767406    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:02:02.767476    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem, removing ...
	I0816 10:02:02.767485    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:02:02.767630    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem (1679 bytes)
	I0816 10:02:02.767837    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:02:02.767868    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem, removing ...
	I0816 10:02:02.767873    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:02:02.767991    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem (1078 bytes)
	I0816 10:02:02.768146    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:02:02.768179    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem, removing ...
	I0816 10:02:02.768184    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:02:02.768268    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem (1123 bytes)
	I0816 10:02:02.768410    3758 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem org=jenkins.ha-286000 san=[127.0.0.1 192.169.0.5 ha-286000 localhost minikube]
	I0816 10:02:02.915225    3758 provision.go:177] copyRemoteCerts
	I0816 10:02:02.915279    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 10:02:02.915299    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:02.915444    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:02.915558    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.915640    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:02.915723    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:02.953069    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 10:02:02.953145    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 10:02:02.973516    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 10:02:02.973600    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0816 10:02:02.993038    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 10:02:02.993100    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 10:02:03.013565    3758 provision.go:87] duration metric: took 246.514857ms to configureAuth
	I0816 10:02:03.013581    3758 buildroot.go:189] setting minikube options for container-runtime
	I0816 10:02:03.013724    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:02:03.013737    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:03.013889    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:03.013978    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:03.014067    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.014147    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.014250    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:03.014369    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:03.014498    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:03.014506    3758 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 10:02:03.073645    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 10:02:03.073660    3758 buildroot.go:70] root file system type: tmpfs
	I0816 10:02:03.073734    3758 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 10:02:03.073745    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:03.073872    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:03.073966    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.074063    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.074164    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:03.074292    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:03.074429    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:03.074479    3758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 10:02:03.148891    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target minikube-automount.service docker.socket
	Requires=minikube-automount.service docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 10:02:03.148912    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:03.149052    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:03.149138    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.149235    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.149324    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:03.149466    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:03.149604    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:03.149616    3758 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 10:02:04.780498    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
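The one-liner above is an install-if-changed idiom: diff exits non-zero when the new unit differs from the current one (or, as here, when the old file does not exist yet), and only then does the mv/daemon-reload/enable/restart branch run. A rough Go equivalent of just the compare-and-swap part (paths are placeholders, and it does not touch systemd):

	// replaceifchanged.go: install the candidate file only when it differs.
	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	func replaceIfChanged(current, candidate string) (bool, error) {
		old, err := os.ReadFile(current) // a missing file counts as "changed"
		newer, err2 := os.ReadFile(candidate)
		if err2 != nil {
			return false, err2
		}
		if err == nil && bytes.Equal(old, newer) {
			return false, os.Remove(candidate) // identical: discard candidate
		}
		return true, os.Rename(candidate, current)
	}

	func main() {
		changed, err := replaceIfChanged("docker.service", "docker.service.new")
		if err != nil {
			panic(err)
		}
		fmt.Println("replaced:", changed)
	}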
	I0816 10:02:04.780519    3758 main.go:141] libmachine: Checking connection to Docker...
	I0816 10:02:04.780526    3758 main.go:141] libmachine: (ha-286000) Calling .GetURL
	I0816 10:02:04.780691    3758 main.go:141] libmachine: Docker is up and running!
	I0816 10:02:04.780698    3758 main.go:141] libmachine: Reticulating splines...
	I0816 10:02:04.780702    3758 client.go:171] duration metric: took 16.091876944s to LocalClient.Create
	I0816 10:02:04.780713    3758 start.go:167] duration metric: took 16.091916939s to libmachine.API.Create "ha-286000"
	I0816 10:02:04.780722    3758 start.go:293] postStartSetup for "ha-286000" (driver="hyperkit")
	I0816 10:02:04.780729    3758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 10:02:04.780738    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:04.780873    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 10:02:04.780884    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:04.780956    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:04.781047    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:04.781154    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:04.781275    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:04.819650    3758 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 10:02:04.823314    3758 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 10:02:04.823335    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/addons for local assets ...
	I0816 10:02:04.823443    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/files for local assets ...
	I0816 10:02:04.823624    3758 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> 18312.pem in /etc/ssl/certs
	I0816 10:02:04.823631    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /etc/ssl/certs/18312.pem
	I0816 10:02:04.823837    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 10:02:04.831860    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:02:04.850901    3758 start.go:296] duration metric: took 70.172878ms for postStartSetup
	I0816 10:02:04.850935    3758 main.go:141] libmachine: (ha-286000) Calling .GetConfigRaw
	I0816 10:02:04.851561    3758 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:02:04.851702    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:02:04.852053    3758 start.go:128] duration metric: took 16.196888252s to createHost
	I0816 10:02:04.852071    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:04.852167    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:04.852261    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:04.852344    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:04.852420    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:04.852525    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:04.852648    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:04.852655    3758 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 10:02:04.911299    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723827724.986438448
	
	I0816 10:02:04.911317    3758 fix.go:216] guest clock: 1723827724.986438448
	I0816 10:02:04.911322    3758 fix.go:229] Guest: 2024-08-16 10:02:04.986438448 -0700 PDT Remote: 2024-08-16 10:02:04.852061 -0700 PDT m=+16.776125584 (delta=134.377448ms)
	I0816 10:02:04.911341    3758 fix.go:200] guest clock delta is within tolerance: 134.377448ms
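The tolerance check above compares the guest's `date +%s.%N` reply against the host clock at the moment of the call. A self-contained Go sketch of that parse-and-compare, reusing the logged value and an assumed 1s tolerance (minikube's actual threshold may differ):

	// clockdelta.go: parse a seconds.nanoseconds reply and compare clocks.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func main() {
		reply := "1723827724.986438448" // guest reply taken from the log above
		parts := strings.SplitN(reply, ".", 2)
		sec, _ := strconv.ParseInt(parts[0], 10, 64)  // errors elided for brevity
		nsec, _ := strconv.ParseInt(parts[1], 10, 64)
		guest := time.Unix(sec, nsec)
		delta := time.Since(guest) // the log compared against the host clock then
		if delta < 0 {
			delta = -delta
		}
		fmt.Printf("guest clock delta: %s (within tolerance: %v)\n",
			delta, delta < time.Second)
	}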
	I0816 10:02:04.911345    3758 start.go:83] releasing machines lock for "ha-286000", held for 16.256325696s
	I0816 10:02:04.911364    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:04.911498    3758 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:02:04.911592    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:04.911942    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:04.912068    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:04.912144    3758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 10:02:04.912171    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:04.912218    3758 ssh_runner.go:195] Run: cat /version.json
	I0816 10:02:04.912231    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:04.912262    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:04.912305    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:04.912360    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:04.912384    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:04.912437    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:04.912464    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:04.912547    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:04.912567    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:04.945688    3758 ssh_runner.go:195] Run: systemctl --version
	I0816 10:02:04.997158    3758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 10:02:05.002278    3758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 10:02:05.002315    3758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 10:02:05.015089    3758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 10:02:05.015098    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:02:05.015195    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:02:05.030338    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 10:02:05.038588    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 10:02:05.046898    3758 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 10:02:05.046932    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 10:02:05.055211    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:02:05.063533    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 10:02:05.073244    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:02:05.081895    3758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 10:02:05.090915    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 10:02:05.099773    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 10:02:05.108719    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 10:02:05.117772    3758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 10:02:05.125574    3758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 10:02:05.145403    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:05.261568    3758 ssh_runner.go:195] Run: sudo systemctl restart containerd
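The sed edits above all follow one pattern: rewrite a single key in /etc/containerd/config.toml in place, then daemon-reload and restart. A Go sketch of the SystemdCgroup rewrite using the same anchored-regex idea (the file path and permissions are illustrative):

	// cgroupfs.go: force SystemdCgroup = false in a containerd config.
	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		path := "config.toml" // stand-in for /etc/containerd/config.toml
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		// (?m) makes ^/$ match per line, preserving the key's indentation.
		re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
		out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
		if err := os.WriteFile(path, out, 0o644); err != nil {
			panic(err)
		}
	}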
	I0816 10:02:05.278920    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:02:05.278996    3758 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 10:02:05.293925    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:02:05.306704    3758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 10:02:05.320000    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:02:05.330230    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:02:05.340255    3758 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 10:02:05.369098    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:02:05.379374    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:02:05.395120    3758 ssh_runner.go:195] Run: which cri-dockerd
	I0816 10:02:05.397921    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 10:02:05.404866    3758 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 10:02:05.417935    3758 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 10:02:05.514793    3758 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 10:02:05.626208    3758 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 10:02:05.626292    3758 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 10:02:05.640317    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:05.737978    3758 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:02:08.105430    3758 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.367458496s)
	I0816 10:02:08.105491    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0816 10:02:08.115970    3758 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0816 10:02:08.129325    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:02:08.140312    3758 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0816 10:02:08.237628    3758 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0816 10:02:08.340285    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:08.436168    3758 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0816 10:02:08.457808    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:02:08.468314    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:08.561830    3758 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0816 10:02:08.620080    3758 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0816 10:02:08.620164    3758 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
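A 60-second wait for a socket path, as announced above, is typically just a stat poll. A hedged Go sketch of such a loop (the 500ms interval is an assumption, not minikube's actual cadence):

	// waitsock.go: poll until a path exists or the deadline passes.
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func waitForPath(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		if err := waitForPath("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
			panic(err)
		}
		fmt.Println("socket is ready")
	}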
	I0816 10:02:08.624716    3758 start.go:563] Will wait 60s for crictl version
	I0816 10:02:08.624764    3758 ssh_runner.go:195] Run: which crictl
	I0816 10:02:08.627806    3758 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 10:02:08.660437    3758 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0816 10:02:08.660505    3758 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:02:08.678073    3758 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:02:08.722165    3758 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0816 10:02:08.722217    3758 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:02:08.722605    3758 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0816 10:02:08.727140    3758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
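The hosts-file one-liner above strips any existing `host.minikube.internal` line and appends a fresh mapping, staging the result in a temp file before copying it into place. A Go sketch of the same idempotent upsert (it writes to `<path>.new` rather than copying over /etc/hosts, so it is safe to run anywhere):

	// hostsentry.go: drop any line for the name, then append the new mapping.
	package main

	import (
		"os"
		"strings"
	)

	func upsertHost(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Match the tab-separated suffix, as the grep -v above does.
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path+".new", []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		if err := upsertHost("hosts", "192.169.0.1", "host.minikube.internal"); err != nil {
			panic(err)
		}
	}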
	I0816 10:02:08.736769    3758 kubeadm.go:883] updating cluster {Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 10:02:08.736830    3758 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:02:08.736887    3758 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 10:02:08.748223    3758 docker.go:685] Got preloaded images: 
	I0816 10:02:08.748236    3758 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.0 wasn't preloaded
	I0816 10:02:08.748294    3758 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0816 10:02:08.755601    3758 ssh_runner.go:195] Run: which lz4
	I0816 10:02:08.758510    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0816 10:02:08.758630    3758 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 10:02:08.761594    3758 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 10:02:08.761610    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342554258 bytes)
	I0816 10:02:09.771496    3758 docker.go:649] duration metric: took 1.012922149s to copy over tarball
	I0816 10:02:09.771559    3758 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 10:02:12.050040    3758 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.27849665s)
	I0816 10:02:12.050054    3758 ssh_runner.go:146] rm: /preloaded.tar.lz4
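From the two durations above, the preload copy moved 342,554,258 bytes in about 1.013s, roughly 338 MB/s, and the lz4 extraction took a further ~2.28s. The arithmetic, as a trivial Go check using only the logged figures:

	// throughput.go: back-of-envelope rate for the preload copy above.
	package main

	import "fmt"

	func main() {
		const totalBytes = 342554258.0 // size reported by the scp line
		const secs = 1.012922149       // "took ... to copy over tarball"
		fmt.Printf("copy rate: %.0f MB/s (%.0f MiB/s)\n",
			totalBytes/secs/1e6, totalBytes/secs/(1<<20))
	}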
	I0816 10:02:12.075438    3758 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0816 10:02:12.083331    3758 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0816 10:02:12.097261    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:12.204348    3758 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:02:14.516272    3758 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.311936705s)
	I0816 10:02:14.516363    3758 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 10:02:14.529500    3758 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0816 10:02:14.529519    3758 cache_images.go:84] Images are preloaded, skipping loading
	I0816 10:02:14.529530    3758 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.0 docker true true} ...
	I0816 10:02:14.529607    3758 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-286000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 10:02:14.529674    3758 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0816 10:02:14.568093    3758 cni.go:84] Creating CNI manager for ""
	I0816 10:02:14.568107    3758 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0816 10:02:14.568120    3758 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 10:02:14.568134    3758 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-286000 NodeName:ha-286000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 10:02:14.568222    3758 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-286000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 10:02:14.568237    3758 kube-vip.go:115] generating kube-vip config ...
	I0816 10:02:14.568286    3758 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 10:02:14.580706    3758 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 10:02:14.580778    3758 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
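Once a kube-vip leader is elected, the VIP from the manifest above (192.169.0.254) should answer on the load-balanced port 8443; a refused or timed-out dial simply means no leader holds the address yet. A trivial reachability probe in Go (the 2s timeout is an arbitrary choice):

	// vipcheck.go: probe the kube-vip control-plane VIP from the config above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "192.169.0.254:8443", 2*time.Second)
		if err != nil {
			fmt.Println("VIP not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("VIP is answering on 8443")
	}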
	I0816 10:02:14.580832    3758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 10:02:14.588124    3758 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 10:02:14.588174    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0816 10:02:14.595283    3758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0816 10:02:14.608721    3758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 10:02:14.622036    3758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0816 10:02:14.635525    3758 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1446 bytes)
	I0816 10:02:14.648868    3758 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0816 10:02:14.651748    3758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:02:14.661305    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:14.752561    3758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:02:14.768967    3758 certs.go:68] Setting up /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000 for IP: 192.169.0.5
	I0816 10:02:14.768980    3758 certs.go:194] generating shared ca certs ...
	I0816 10:02:14.768991    3758 certs.go:226] acquiring lock for ca certs: {Name:mka549fbc467849834d2267da204028c49d88a3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:14.769183    3758 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key
	I0816 10:02:14.769258    3758 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key
	I0816 10:02:14.769269    3758 certs.go:256] generating profile certs ...
	I0816 10:02:14.769315    3758 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key
	I0816 10:02:14.769328    3758 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt with IP's: []
	I0816 10:02:14.849573    3758 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt ...
	I0816 10:02:14.849589    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt: {Name:mk9c6a2b5871e8d33f12e0a58268b028ea37ab4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:14.849911    3758 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key ...
	I0816 10:02:14.849919    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key: {Name:mk7d8ed264e583d7ced6b16b60c72c11b15a1f22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:14.850137    3758 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.af99fd6a
	I0816 10:02:14.850151    3758 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.af99fd6a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.254]
	I0816 10:02:14.968129    3758 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.af99fd6a ...
	I0816 10:02:14.968143    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.af99fd6a: {Name:mk3b8945a16852dca8274ec1ab3a9f2eade80831 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:14.968464    3758 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.af99fd6a ...
	I0816 10:02:14.968473    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.af99fd6a: {Name:mk90fcce3348a214c4733fe5a1fb806faff7773f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:14.968717    3758 certs.go:381] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.af99fd6a -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt
	I0816 10:02:14.968940    3758 certs.go:385] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.af99fd6a -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key
	I0816 10:02:14.969171    3758 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key
	I0816 10:02:14.969185    3758 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt with IP's: []
	I0816 10:02:15.029329    3758 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt ...
	I0816 10:02:15.029338    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt: {Name:mk70aaa3903fb58df2124d779dbea07aa95769f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:15.029604    3758 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key ...
	I0816 10:02:15.029611    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key: {Name:mka3a85354927e8da31f9dfc67f92dea3c77ca65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
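The profile certs above are ordinary CA-signed x509 serving certs whose IP SANs combine the service CIDR's first address, localhost, the node IP, and the HA VIP. A condensed Go sketch of the same generate-and-sign flow (key sizes, serials, and lifetimes here are made-up values, not minikube's):

	// signcert.go: sign a serving cert for the IP SANs shown in the log above.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Stand-in CA, where minikube would instead load .minikube/ca.{crt,key}.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		leaf := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			IPAddresses: []net.IP{ // the IP SANs from the log line above
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.169.0.5"),
				net.ParseIP("192.169.0.254"),
			},
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, leaf, caCert, &leafKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}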
	I0816 10:02:15.029850    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 10:02:15.029876    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 10:02:15.029898    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 10:02:15.029940    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 10:02:15.029958    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 10:02:15.029990    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 10:02:15.030008    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 10:02:15.030025    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 10:02:15.030118    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem (1338 bytes)
	W0816 10:02:15.030163    3758 certs.go:480] ignoring /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831_empty.pem, impossibly tiny 0 bytes
	I0816 10:02:15.030171    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 10:02:15.030240    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem (1078 bytes)
	I0816 10:02:15.030268    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem (1123 bytes)
	I0816 10:02:15.030354    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem (1679 bytes)
	I0816 10:02:15.030467    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:02:15.030499    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:15.030525    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem -> /usr/share/ca-certificates/1831.pem
	I0816 10:02:15.030542    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /usr/share/ca-certificates/18312.pem
	I0816 10:02:15.031030    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 10:02:15.051618    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 10:02:15.071318    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 10:02:15.091017    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 10:02:15.110497    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0816 10:02:15.131033    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 10:02:15.150747    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 10:02:15.170186    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 10:02:15.189808    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 10:02:15.209868    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem --> /usr/share/ca-certificates/1831.pem (1338 bytes)
	I0816 10:02:15.229378    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /usr/share/ca-certificates/18312.pem (1708 bytes)
	I0816 10:02:15.248667    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 10:02:15.262035    3758 ssh_runner.go:195] Run: openssl version
	I0816 10:02:15.266135    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 10:02:15.275959    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:15.279367    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:48 /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:15.279403    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:15.283611    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 10:02:15.291872    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1831.pem && ln -fs /usr/share/ca-certificates/1831.pem /etc/ssl/certs/1831.pem"
	I0816 10:02:15.299890    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1831.pem
	I0816 10:02:15.303179    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 16:57 /usr/share/ca-certificates/1831.pem
	I0816 10:02:15.303212    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1831.pem
	I0816 10:02:15.307423    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1831.pem /etc/ssl/certs/51391683.0"
	I0816 10:02:15.315741    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18312.pem && ln -fs /usr/share/ca-certificates/18312.pem /etc/ssl/certs/18312.pem"
	I0816 10:02:15.324088    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18312.pem
	I0816 10:02:15.327441    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 16:57 /usr/share/ca-certificates/18312.pem
	I0816 10:02:15.327479    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18312.pem
	I0816 10:02:15.331723    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18312.pem /etc/ssl/certs/3ec20f2e.0"
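Each cert check above is a stat plus an `openssl x509` probe before the file is symlinked into /etc/ssl/certs under its subject-hash name. A small Go sketch of the inspection half (the filename is a placeholder; computing OpenSSL's subject hash itself is omitted):

	// certinfo.go: load a PEM certificate and print subject and validity.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("minikubeCA.pem")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Printf("subject=%s notAfter=%s\n", cert.Subject, cert.NotAfter)
	}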
	I0816 10:02:15.339921    3758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 10:02:15.343061    3758 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 10:02:15.343109    3758 kubeadm.go:392] StartCluster: {Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:02:15.343194    3758 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0816 10:02:15.356016    3758 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 10:02:15.363405    3758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 10:02:15.370635    3758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 10:02:15.377806    3758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 10:02:15.377819    3758 kubeadm.go:157] found existing configuration files:
	
	I0816 10:02:15.377855    3758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 10:02:15.384868    3758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 10:02:15.384903    3758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 10:02:15.392001    3758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 10:02:15.398935    3758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 10:02:15.398971    3758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 10:02:15.406181    3758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 10:02:15.413108    3758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 10:02:15.413142    3758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 10:02:15.422603    3758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 10:02:15.435692    3758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 10:02:15.435749    3758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 10:02:15.450920    3758 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 10:02:15.521417    3758 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 10:02:15.521501    3758 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 10:02:15.590843    3758 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 10:02:15.590923    3758 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 10:02:15.591008    3758 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 10:02:15.600874    3758 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 10:02:15.632377    3758 out.go:235]   - Generating certificates and keys ...
	I0816 10:02:15.632427    3758 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 10:02:15.632475    3758 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 10:02:15.791885    3758 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0816 10:02:15.852803    3758 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0816 10:02:15.961982    3758 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0816 10:02:16.146557    3758 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0816 10:02:16.493401    3758 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0816 10:02:16.493556    3758 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-286000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I0816 10:02:16.704832    3758 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0816 10:02:16.705108    3758 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-286000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I0816 10:02:16.922818    3758 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0816 10:02:17.082810    3758 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0816 10:02:17.393939    3758 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0816 10:02:17.394070    3758 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 10:02:17.465159    3758 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 10:02:17.731984    3758 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 10:02:18.019833    3758 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 10:02:18.164168    3758 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 10:02:18.273756    3758 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 10:02:18.274215    3758 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 10:02:18.275949    3758 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 10:02:18.300490    3758 out.go:235]   - Booting up control plane ...
	I0816 10:02:18.300569    3758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 10:02:18.300638    3758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 10:02:18.300693    3758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 10:02:18.300780    3758 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 10:02:18.300850    3758 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 10:02:18.300879    3758 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 10:02:18.405245    3758 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 10:02:18.405330    3758 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 10:02:18.909805    3758 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 504.760374ms
	I0816 10:02:18.909928    3758 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0816 10:02:24.844637    3758 kubeadm.go:310] [api-check] The API server is healthy after 5.938882642s
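Both health waits above (the kubelet on :10248, then the API server) are polls against a healthz endpoint. A Go sketch of that loop for the kubelet case; note the real API-server check would need TLS client configuration, which is omitted here, and the 500ms interval is an assumption:

	// healthz.go: poll an HTTP healthz endpoint until 200 or deadline.
	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func waitHealthy(url string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := http.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s not healthy after %s", url, timeout)
	}

	func main() {
		// URL and the 4m0s budget are taken from the logged messages.
		if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("healthy")
	}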
	I0816 10:02:24.854864    3758 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 10:02:24.862134    3758 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 10:02:24.890940    3758 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 10:02:24.891101    3758 kubeadm.go:310] [mark-control-plane] Marking the node ha-286000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 10:02:24.901441    3758 kubeadm.go:310] [bootstrap-token] Using token: 73merd.1elxantqs1p5mkiz
	I0816 10:02:24.939545    3758 out.go:235]   - Configuring RBAC rules ...
	I0816 10:02:24.939737    3758 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 10:02:24.944670    3758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 10:02:24.951393    3758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 10:02:24.953383    3758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 10:02:24.955899    3758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 10:02:24.958387    3758 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 10:02:25.251799    3758 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 10:02:25.676108    3758 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 10:02:26.249233    3758 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 10:02:26.249936    3758 kubeadm.go:310] 
	I0816 10:02:26.250001    3758 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 10:02:26.250014    3758 kubeadm.go:310] 
	I0816 10:02:26.250090    3758 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 10:02:26.250102    3758 kubeadm.go:310] 
	I0816 10:02:26.250122    3758 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 10:02:26.250175    3758 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 10:02:26.250221    3758 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 10:02:26.250229    3758 kubeadm.go:310] 
	I0816 10:02:26.250268    3758 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 10:02:26.250272    3758 kubeadm.go:310] 
	I0816 10:02:26.250305    3758 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 10:02:26.250313    3758 kubeadm.go:310] 
	I0816 10:02:26.250359    3758 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 10:02:26.250425    3758 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 10:02:26.250484    3758 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 10:02:26.250491    3758 kubeadm.go:310] 
	I0816 10:02:26.250569    3758 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 10:02:26.250648    3758 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 10:02:26.250656    3758 kubeadm.go:310] 
	I0816 10:02:26.250716    3758 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 73merd.1elxantqs1p5mkiz \
	I0816 10:02:26.250793    3758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:995590413ea712c4f7db2cf849d28264ebc291e6cd53d741ce1d22c1daaa1602 \
	I0816 10:02:26.250811    3758 kubeadm.go:310] 	--control-plane 
	I0816 10:02:26.250817    3758 kubeadm.go:310] 
	I0816 10:02:26.250879    3758 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 10:02:26.250885    3758 kubeadm.go:310] 
	I0816 10:02:26.250947    3758 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 73merd.1elxantqs1p5mkiz \
	I0816 10:02:26.251036    3758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:995590413ea712c4f7db2cf849d28264ebc291e6cd53d741ce1d22c1daaa1602 
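	
	The two kubeadm join commands above differ only in the --control-plane flag, which makes the joining node fetch control-plane certificates and run its own apiserver instead of registering as a worker. Bootstrap tokens such as 73merd.1elxantqs1p5mkiz expire (24h by default), so a later join needs a fresh one; a minimal sketch using stock kubeadm on the existing control-plane node:
	
		# Print a ready-to-run worker join command with a newly minted token.
		kubeadm token create --print-join-command
	
		# For another control-plane node, re-upload the control-plane certs and
		# append --control-plane --certificate-key <printed key> to the command.
		sudo kubeadm init phase upload-certs --upload-certs
	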
	I0816 10:02:26.251297    3758 kubeadm.go:310] W0816 17:02:15.599099    1566 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 10:02:26.251521    3758 kubeadm.go:310] W0816 17:02:15.600578    1566 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 10:02:26.251608    3758 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
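	
	Both warnings are benign for this run but actionable: the profile still feeds kubeadm a deprecated kubeadm.k8s.io/v1beta3 config, and the kubelet unit is started but not enabled. The remediation is exactly what the messages themselves suggest; a sketch inside the guest:
	
		# Rewrite the v1beta3 config with the newer API version kubeadm expects.
		kubeadm config migrate --old-config old.yaml --new-config new.yaml
	
		# Enable the unit so kubelet (and its static pods) survive a reboot.
		sudo systemctl enable kubelet.service
	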
	I0816 10:02:26.251620    3758 cni.go:84] Creating CNI manager for ""
	I0816 10:02:26.251624    3758 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0816 10:02:26.289324    3758 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 10:02:26.363318    3758 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0816 10:02:26.368971    3758 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0816 10:02:26.368983    3758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0816 10:02:26.382855    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
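	
	With one node found but a multi-node profile requested, minikube selects kindnet as the CNI, streams its manifest to /var/tmp/minikube/cni.yaml, and applies it with the pinned v1.31.0 kubectl. A sketch for confirming the CNI actually came up, assuming the bundled kindnet manifest labels its pods app=kindnet (adjust the selector if that assumption does not hold):
	
		# The portmap plugin checked above must exist or CNI networking fails.
		stat /opt/cni/bin/portmap
	
		# One kindnet pod per node should go Running shortly after the apply.
		kubectl -n kube-system get pods -l app=kindnet -o wide
	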
	I0816 10:02:26.629869    3758 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 10:02:26.629947    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-286000 minikube.k8s.io/updated_at=2024_08_16T10_02_26_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd minikube.k8s.io/name=ha-286000 minikube.k8s.io/primary=true
	I0816 10:02:26.629949    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:26.658712    3758 ops.go:34] apiserver oom_adj: -16
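	
	The -16 read back from /proc/$(pgrep kube-apiserver)/oom_adj shows the apiserver is strongly deprioritized for the kernel OOM killer (oom_adj ranges from -17, never kill, to +15). oom_adj is the legacy interface; a sketch of the same check against the modern knob, which uses a -1000..1000 scale:
	
		# Legacy value, as minikube reads it above.
		cat /proc/$(pgrep kube-apiserver)/oom_adj
	
		# Modern equivalent; -1000 means the process is never OOM-killed.
		cat /proc/$(pgrep kube-apiserver)/oom_score_adj
	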
	I0816 10:02:26.756402    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:27.257667    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:27.757259    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:28.256864    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:28.757602    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:29.256802    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:29.757282    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:29.882342    3758 kubeadm.go:1113] duration metric: took 3.252511045s to wait for elevateKubeSystemPrivileges
	I0816 10:02:29.882365    3758 kubeadm.go:394] duration metric: took 14.539506386s to StartCluster
	I0816 10:02:29.882380    3758 settings.go:142] acquiring lock: {Name:mk60e377244eac756871b74b6bd45115654652a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:29.882471    3758 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:02:29.882922    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/kubeconfig: {Name:mk7f4491c8ec5cf96c96a5a1a5dbfba4301394a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:29.883167    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 10:02:29.883170    3758 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:02:29.883182    3758 start.go:241] waiting for startup goroutines ...
	I0816 10:02:29.883204    3758 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
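	
	The toEnable map records that only default-storageclass and storage-provisioner are switched on for this profile; every other addon stays false. The same state is reachable through the CLI; a sketch against the profile this test created:
	
		# Show addon status for the ha-286000 profile.
		minikube addons list -p ha-286000
	
		# The two addons this log enables, toggled explicitly.
		minikube addons enable storage-provisioner -p ha-286000
		minikube addons enable default-storageclass -p ha-286000
	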
	I0816 10:02:29.883238    3758 addons.go:69] Setting storage-provisioner=true in profile "ha-286000"
	I0816 10:02:29.883244    3758 addons.go:69] Setting default-storageclass=true in profile "ha-286000"
	I0816 10:02:29.883261    3758 addons.go:234] Setting addon storage-provisioner=true in "ha-286000"
	I0816 10:02:29.883278    3758 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-286000"
	I0816 10:02:29.883280    3758 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:02:29.883305    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:02:29.883558    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:29.883573    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:29.883575    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:29.883598    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:29.892969    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51028
	I0816 10:02:29.893238    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51030
	I0816 10:02:29.893356    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:29.893605    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:29.893730    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:29.893741    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:29.893938    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:29.893951    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:29.893976    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:29.894142    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:29.894254    3758 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:02:29.894338    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:29.894365    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:29.894397    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:29.894424    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:02:29.896690    3758 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:02:29.896968    3758 kapi.go:59] client config for ha-286000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key", CAFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xa202f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 10:02:29.897373    3758 cert_rotation.go:140] Starting client certificate rotation controller
	I0816 10:02:29.897530    3758 addons.go:234] Setting addon default-storageclass=true in "ha-286000"
	I0816 10:02:29.897550    3758 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:02:29.897771    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:29.897789    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:29.903669    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51032
	I0816 10:02:29.904077    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:29.904431    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:29.904451    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:29.904736    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:29.904889    3758 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:02:29.904993    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:29.905054    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:02:29.906086    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:29.906730    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51034
	I0816 10:02:29.907081    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:29.907463    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:29.907491    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:29.907694    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:29.908045    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:29.908061    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:29.917300    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51036
	I0816 10:02:29.917667    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:29.918020    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:29.918034    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:29.918277    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:29.918384    3758 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:02:29.918483    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:29.918572    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:02:29.919534    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:29.919671    3758 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 10:02:29.919680    3758 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 10:02:29.919688    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:29.919789    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:29.919896    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:29.919990    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:29.920072    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:29.928735    3758 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 10:02:29.965711    3758 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 10:02:29.965725    3758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 10:02:29.965744    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:29.965926    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:29.966043    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:29.966151    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:29.966280    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:30.033245    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
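	
	The sed pipeline above rewrites the live coredns ConfigMap in place: it inserts a hosts block that resolves host.minikube.internal to the gateway 192.169.0.1 (with fallthrough for everything else) ahead of the forward directive, adds a log directive after errors, and pushes the result back with kubectl replace -f -. A sketch for verifying the injected record afterwards:
	
		# Dump the live Corefile and show the injected hosts stanza.
		kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' \
		  | grep -A3 'hosts {'
	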
	I0816 10:02:30.037035    3758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 10:02:30.090040    3758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 10:02:30.396267    3758 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
	I0816 10:02:30.396311    3758 main.go:141] libmachine: Making call to close driver server
	I0816 10:02:30.396323    3758 main.go:141] libmachine: (ha-286000) Calling .Close
	I0816 10:02:30.396482    3758 main.go:141] libmachine: Successfully made call to close driver server
	I0816 10:02:30.396488    3758 main.go:141] libmachine: (ha-286000) DBG | Closing plugin on server side
	I0816 10:02:30.396492    3758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 10:02:30.396501    3758 main.go:141] libmachine: Making call to close driver server
	I0816 10:02:30.396507    3758 main.go:141] libmachine: (ha-286000) Calling .Close
	I0816 10:02:30.396655    3758 main.go:141] libmachine: Successfully made call to close driver server
	I0816 10:02:30.396662    3758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 10:02:30.396713    3758 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0816 10:02:30.396725    3758 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0816 10:02:30.396804    3758 round_trippers.go:463] GET https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0816 10:02:30.396810    3758 round_trippers.go:469] Request Headers:
	I0816 10:02:30.396817    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:02:30.396822    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:02:30.402286    3758 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 10:02:30.402706    3758 round_trippers.go:463] PUT https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0816 10:02:30.402713    3758 round_trippers.go:469] Request Headers:
	I0816 10:02:30.402718    3758 round_trippers.go:473]     Content-Type: application/json
	I0816 10:02:30.402721    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:02:30.402723    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:02:30.404290    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:02:30.404403    3758 main.go:141] libmachine: Making call to close driver server
	I0816 10:02:30.404416    3758 main.go:141] libmachine: (ha-286000) Calling .Close
	I0816 10:02:30.404565    3758 main.go:141] libmachine: Successfully made call to close driver server
	I0816 10:02:30.404575    3758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 10:02:30.404586    3758 main.go:141] libmachine: (ha-286000) DBG | Closing plugin on server side
	I0816 10:02:30.497806    3758 main.go:141] libmachine: Making call to close driver server
	I0816 10:02:30.497818    3758 main.go:141] libmachine: (ha-286000) Calling .Close
	I0816 10:02:30.498006    3758 main.go:141] libmachine: Successfully made call to close driver server
	I0816 10:02:30.498011    3758 main.go:141] libmachine: (ha-286000) DBG | Closing plugin on server side
	I0816 10:02:30.498016    3758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 10:02:30.498023    3758 main.go:141] libmachine: Making call to close driver server
	I0816 10:02:30.498040    3758 main.go:141] libmachine: (ha-286000) Calling .Close
	I0816 10:02:30.498160    3758 main.go:141] libmachine: Successfully made call to close driver server
	I0816 10:02:30.498168    3758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 10:02:30.498179    3758 main.go:141] libmachine: (ha-286000) DBG | Closing plugin on server side
	I0816 10:02:30.538607    3758 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0816 10:02:30.596522    3758 addons.go:510] duration metric: took 713.336883ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0816 10:02:30.596578    3758 start.go:246] waiting for cluster config update ...
	I0816 10:02:30.596599    3758 start.go:255] writing updated cluster config ...
	I0816 10:02:30.634683    3758 out.go:201] 
	I0816 10:02:30.655754    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:02:30.655861    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:02:30.677634    3758 out.go:177] * Starting "ha-286000-m02" control-plane node in "ha-286000" cluster
	I0816 10:02:30.737443    3758 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:02:30.737475    3758 cache.go:56] Caching tarball of preloaded images
	I0816 10:02:30.737660    3758 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 10:02:30.737679    3758 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:02:30.737771    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:02:30.738414    3758 start.go:360] acquireMachinesLock for ha-286000-m02: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:02:30.738494    3758 start.go:364] duration metric: took 63.585µs to acquireMachinesLock for "ha-286000-m02"
	I0816 10:02:30.738517    3758 start.go:93] Provisioning new machine with config: &{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
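	
	With the primary node serving, provisioning moves to m02, the second control-plane node; note APIServerHAVIP:192.169.0.254 in the config, the shared virtual IP that fronts every control plane on port 8443 and that the kubeconfig above already points at. A sketch for inspecting the topology once the node is up, assuming the profile name from this run:
	
		# List every node in the HA profile with its IP.
		minikube node list -p ha-286000
	
		# The cluster endpoint resolves through the HA VIP, not a node IP.
		kubectl --kubeconfig /Users/jenkins/minikube-integration/19461-1276/kubeconfig cluster-info
	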
	I0816 10:02:30.738590    3758 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I0816 10:02:30.760743    3758 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 10:02:30.760905    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:30.760935    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:30.771028    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51041
	I0816 10:02:30.771383    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:30.771745    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:30.771760    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:30.771967    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:30.772077    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:02:30.772157    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:30.772261    3758 start.go:159] libmachine.API.Create for "ha-286000" (driver="hyperkit")
	I0816 10:02:30.772276    3758 client.go:168] LocalClient.Create starting
	I0816 10:02:30.772303    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem
	I0816 10:02:30.772358    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:02:30.772368    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:02:30.772413    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem
	I0816 10:02:30.772450    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:02:30.772460    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:02:30.772472    3758 main.go:141] libmachine: Running pre-create checks...
	I0816 10:02:30.772477    3758 main.go:141] libmachine: (ha-286000-m02) Calling .PreCreateCheck
	I0816 10:02:30.772545    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:30.772568    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetConfigRaw
	I0816 10:02:30.781772    3758 main.go:141] libmachine: Creating machine...
	I0816 10:02:30.781789    3758 main.go:141] libmachine: (ha-286000-m02) Calling .Create
	I0816 10:02:30.781943    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:30.782181    3758 main.go:141] libmachine: (ha-286000-m02) DBG | I0816 10:02:30.781934    3805 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:02:30.782269    3758 main.go:141] libmachine: (ha-286000-m02) Downloading /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1276/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 10:02:30.982082    3758 main.go:141] libmachine: (ha-286000-m02) DBG | I0816 10:02:30.982020    3805 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa...
	I0816 10:02:31.266609    3758 main.go:141] libmachine: (ha-286000-m02) DBG | I0816 10:02:31.266514    3805 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/ha-286000-m02.rawdisk...
	I0816 10:02:31.266629    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Writing magic tar header
	I0816 10:02:31.266638    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Writing SSH key tar header
	I0816 10:02:31.267382    3758 main.go:141] libmachine: (ha-286000-m02) DBG | I0816 10:02:31.267242    3805 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02 ...
	I0816 10:02:31.642864    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:31.642889    3758 main.go:141] libmachine: (ha-286000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/hyperkit.pid
	I0816 10:02:31.642912    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Using UUID f7630301-2bc3-4935-a751-ac72999da031
	I0816 10:02:31.669349    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Generated MAC 72:69:8f:11:68:1d
	I0816 10:02:31.669369    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000
	I0816 10:02:31.669397    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f7630301-2bc3-4935-a751-ac72999da031", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:02:31.669426    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f7630301-2bc3-4935-a751-ac72999da031", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:02:31.669455    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "f7630301-2bc3-4935-a751-ac72999da031", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/ha-286000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"}
	I0816 10:02:31.669484    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U f7630301-2bc3-4935-a751-ac72999da031 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/ha-286000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"
	I0816 10:02:31.669499    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 10:02:31.672492    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: Pid is 3806
	I0816 10:02:31.672957    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 0
	I0816 10:02:31.673000    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:31.673054    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:31.674014    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:31.674086    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:02:31.674098    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:02:31.674120    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:02:31.674129    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:02:31.674137    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
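	
	The hyperkit driver has no guest channel for address discovery, so it polls the host's DHCP lease database for the MAC it generated (72:69:8f:11:68:1d); each "Attempt" below is one poll, and the search succeeds once the guest requests a lease. A sketch of the same lookup done by hand on the macOS host:
	
		# macOS bootpd records leases here; match on the generated MAC.
		grep -i -B1 -A3 '72:69:8f:11:68:1d' /var/db/dhcpd_leases
	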
	I0816 10:02:31.680025    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 10:02:31.688109    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 10:02:31.688968    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:02:31.688993    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:02:31.689006    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:02:31.689016    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:02:32.077133    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 10:02:32.077153    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 10:02:32.191789    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:02:32.191806    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:02:32.191814    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:02:32.191825    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:02:32.192676    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 10:02:32.192686    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 10:02:33.675165    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 1
	I0816 10:02:33.675183    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:33.675297    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:33.676089    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:33.676141    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:02:33.676153    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:02:33.676162    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:02:33.676169    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:02:33.676193    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:02:35.676266    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 2
	I0816 10:02:35.676285    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:35.676360    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:35.677182    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:35.677237    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:02:35.677248    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:02:35.677262    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:02:35.677270    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:02:35.677281    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:02:37.678515    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 3
	I0816 10:02:37.678532    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:37.678572    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:37.679405    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:37.679441    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:02:37.679451    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:02:37.679461    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:02:37.679468    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:02:37.679483    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:02:37.782496    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:37 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 10:02:37.782531    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:37 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 10:02:37.782540    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:37 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 10:02:37.806630    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:37 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 10:02:39.680161    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 4
	I0816 10:02:39.680178    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:39.680273    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:39.681064    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:39.681112    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:02:39.681136    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:02:39.681155    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:02:39.681170    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:02:39.681181    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:02:41.681380    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 5
	I0816 10:02:41.681414    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:41.681563    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:41.682343    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:41.682392    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:02:41.682406    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:02:41.682415    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found match: 72:69:8f:11:68:1d
	I0816 10:02:41.682421    3758 main.go:141] libmachine: (ha-286000-m02) DBG | IP: 192.169.0.6
	I0816 10:02:41.682475    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetConfigRaw
	I0816 10:02:41.683135    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:41.683257    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:41.683358    3758 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 10:02:41.683367    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetState
	I0816 10:02:41.683478    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:41.683537    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:41.684414    3758 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 10:02:41.684423    3758 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 10:02:41.684427    3758 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 10:02:41.684431    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:41.684566    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:41.684692    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:41.684809    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:41.684943    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:41.685095    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:41.685300    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:41.685308    3758 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 10:02:42.746559    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
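	
	WaitForSSH just retries "exit 0" over SSH until it succeeds, so the empty output with a nil error above is the first moment sshd answered. A sketch of the same probe from the host, using the key path and user this log records for the machine:
	
		# Exits 0 (and prints "reachable") once the guest's sshd accepts the key.
		ssh -i /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa \
		    -o StrictHostKeyChecking=no docker@192.169.0.6 'exit 0' && echo reachable
	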
	I0816 10:02:42.746571    3758 main.go:141] libmachine: Detecting the provisioner...
	I0816 10:02:42.746577    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:42.746714    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:42.746824    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.746914    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.746988    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:42.747112    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:42.747269    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:42.747277    3758 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 10:02:42.811630    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 10:02:42.811666    3758 main.go:141] libmachine: found compatible host: buildroot
	I0816 10:02:42.811672    3758 main.go:141] libmachine: Provisioning with buildroot...
	I0816 10:02:42.811681    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:02:42.811816    3758 buildroot.go:166] provisioning hostname "ha-286000-m02"
	I0816 10:02:42.811828    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:02:42.811943    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:42.812045    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:42.812136    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.812274    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.812376    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:42.812513    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:42.812673    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:42.812682    3758 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-286000-m02 && echo "ha-286000-m02" | sudo tee /etc/hostname
	I0816 10:02:42.887531    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-286000-m02
	
	I0816 10:02:42.887545    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:42.887676    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:42.887769    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.887867    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.887953    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:42.888075    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:42.888214    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:42.888225    3758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-286000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-286000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-286000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 10:02:42.959625    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 10:02:42.959641    3758 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19461-1276/.minikube CaCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19461-1276/.minikube}
	I0816 10:02:42.959649    3758 buildroot.go:174] setting up certificates
	I0816 10:02:42.959661    3758 provision.go:84] configureAuth start
	I0816 10:02:42.959666    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:02:42.959803    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:02:42.959899    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:42.959980    3758 provision.go:143] copyHostCerts
	I0816 10:02:42.960007    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:02:42.960055    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem, removing ...
	I0816 10:02:42.960061    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:02:42.960954    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem (1078 bytes)
	I0816 10:02:42.961160    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:02:42.961190    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem, removing ...
	I0816 10:02:42.961195    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:02:42.961273    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem (1123 bytes)
	I0816 10:02:42.961439    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:02:42.961509    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem, removing ...
	I0816 10:02:42.961515    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:02:42.961594    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem (1679 bytes)
	I0816 10:02:42.961746    3758 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem org=jenkins.ha-286000-m02 san=[127.0.0.1 192.169.0.6 ha-286000-m02 localhost minikube]
	I0816 10:02:43.093511    3758 provision.go:177] copyRemoteCerts
	I0816 10:02:43.093556    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 10:02:43.093572    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:43.093719    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:43.093818    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.093917    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:43.094005    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:02:43.132229    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 10:02:43.132299    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 10:02:43.151814    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 10:02:43.151873    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 10:02:43.171464    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 10:02:43.171532    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 10:02:43.191045    3758 provision.go:87] duration metric: took 231.381757ms to configureAuth
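	The configureAuth block above generates a server certificate whose SAN set comes straight from the provision.go:117 line (127.0.0.1, 192.169.0.6, ha-286000-m02, localhost, minikube). As a rough illustration of that step, here is a minimal Go sketch that builds a certificate with the same SAN set; the self-signing shortcut, lifetime, and output file name are assumptions for brevity, since the log shows minikube signing with ca.pem/ca-key.pem.

	// certsan_sketch.go: illustrative only; minikube's real code signs with
	// the CA key pair rather than self-signing as done here.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-286000-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour), // assumed lifetime
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs from the log: [127.0.0.1 192.169.0.6 ha-286000-m02 localhost minikube]
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.6")},
			DNSNames:    []string{"ha-286000-m02", "localhost", "minikube"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		out, err := os.Create("server.pem")
		if err != nil {
			panic(err)
		}
		defer out.Close()
		if err := pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
			panic(err)
		}
	}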
	I0816 10:02:43.191057    3758 buildroot.go:189] setting minikube options for container-runtime
	I0816 10:02:43.191187    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:02:43.191200    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:43.191321    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:43.191410    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:43.191497    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.191580    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.191658    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:43.191759    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:43.191891    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:43.191898    3758 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 10:02:43.256733    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 10:02:43.256744    3758 buildroot.go:70] root file system type: tmpfs
	I0816 10:02:43.256823    3758 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 10:02:43.256835    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:43.256961    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:43.257048    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.257142    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.257220    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:43.257364    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:43.257505    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:43.257553    3758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 10:02:43.330709    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 10:02:43.330729    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:43.330874    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:43.330977    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.331073    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.331167    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:43.331293    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:43.331440    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:43.331451    3758 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 10:02:44.884634    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0816 10:02:44.884648    3758 main.go:141] libmachine: Checking connection to Docker...
	I0816 10:02:44.884655    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetURL
	I0816 10:02:44.884793    3758 main.go:141] libmachine: Docker is up and running!
	I0816 10:02:44.884799    3758 main.go:141] libmachine: Reticulating splines...
	I0816 10:02:44.884804    3758 client.go:171] duration metric: took 14.112778669s to LocalClient.Create
	I0816 10:02:44.884820    3758 start.go:167] duration metric: took 14.112810851s to libmachine.API.Create "ha-286000"
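	The docker.service unit tee'd above is rendered from a template before being shipped over SSH. A minimal Go sketch of that idea follows; the template body is heavily abbreviated and the field names are invented for illustration, with only the ExecStart-clearing trick and the concrete values (hyperkit, 192.169.0.5) taken from the log.

	// dockerunit_sketch.go: render an abbreviated docker.service unit from a
	// template, as the provisioner does before `sudo tee ...docker.service.new`.
	package main

	import (
		"os"
		"text/template"
	)

	const unit = `[Service]
	Type=notify
	Environment=NO_PROXY={{.NoProxy}}
	# Clear the ExecStart inherited from the base unit first; systemd refuses
	# a second ExecStart for non-oneshot services.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --label provider={{.Provider}}
	`

	func main() {
		t := template.Must(template.New("docker.service").Parse(unit))
		data := struct{ NoProxy, Provider string }{"192.169.0.5", "hyperkit"}
		if err := t.Execute(os.Stdout, data); err != nil {
			panic(err)
		}
	}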
	I0816 10:02:44.884826    3758 start.go:293] postStartSetup for "ha-286000-m02" (driver="hyperkit")
	I0816 10:02:44.884832    3758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 10:02:44.884843    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:44.884991    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 10:02:44.885004    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:44.885101    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:44.885204    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:44.885335    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:44.885428    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:02:44.925701    3758 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 10:02:44.929599    3758 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 10:02:44.929613    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/addons for local assets ...
	I0816 10:02:44.929722    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/files for local assets ...
	I0816 10:02:44.929908    3758 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> 18312.pem in /etc/ssl/certs
	I0816 10:02:44.929914    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /etc/ssl/certs/18312.pem
	I0816 10:02:44.930119    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 10:02:44.940484    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:02:44.973200    3758 start.go:296] duration metric: took 88.36731ms for postStartSetup
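	The filesync scan above walks .minikube/files and maps each local asset to its guest path (here 18312.pem lands in /etc/ssl/certs). A minimal Go sketch of that mapping, assuming the layout convention that the path under files/ mirrors the guest path:

	// filesync_sketch.go: walk the local assets dir and print each file's
	// implied destination inside the guest. The root is the path from the log.
	package main

	import (
		"fmt"
		"io/fs"
		"path/filepath"
		"strings"
	)

	func main() {
		root := "/Users/jenkins/minikube-integration/19461-1276/.minikube/files"
		err := filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
			if err != nil || d.IsDir() {
				return err
			}
			// Guest path = path relative to the files/ root.
			dest := "/" + strings.TrimPrefix(p, root+"/")
			fmt.Printf("%s -> %s\n", p, dest)
			return nil
		})
		if err != nil {
			panic(err)
		}
	}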
	I0816 10:02:44.973230    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetConfigRaw
	I0816 10:02:44.973848    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:02:44.973996    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:02:44.974351    3758 start.go:128] duration metric: took 14.23599979s to createHost
	I0816 10:02:44.974366    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:44.974448    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:44.974568    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:44.974660    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:44.974744    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:44.974844    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:44.974966    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:44.974973    3758 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 10:02:45.036267    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723827764.932365297
	
	I0816 10:02:45.036285    3758 fix.go:216] guest clock: 1723827764.932365297
	I0816 10:02:45.036291    3758 fix.go:229] Guest: 2024-08-16 10:02:44.932365297 -0700 PDT Remote: 2024-08-16 10:02:44.97436 -0700 PDT m=+56.899083401 (delta=-41.994703ms)
	I0816 10:02:45.036301    3758 fix.go:200] guest clock delta is within tolerance: -41.994703ms
	I0816 10:02:45.036306    3758 start.go:83] releasing machines lock for "ha-286000-m02", held for 14.298063452s
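	The fix.go lines above run `date +%s.%N` in the guest and accept the result because the skew (-41.994703ms) is within tolerance. A minimal Go sketch of that check; the 1s tolerance is an assumption, the sample value is the one captured in the log, and the fractional field is assumed to be the full nine digits %N prints.

	// clockdelta_sketch.go: parse the guest's `date +%s.%N` output, diff it
	// against the host clock, and flag whether the skew is acceptable.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func guestDelta(out string) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return 0, err
			}
		}
		return time.Unix(sec, nsec).Sub(time.Now()), nil
	}

	func main() {
		d, err := guestDelta("1723827764.932365297\n") // output captured above
		if err != nil {
			panic(err)
		}
		tolerance := time.Second // assumed tolerance
		ok := d < tolerance && d > -tolerance
		fmt.Printf("guest clock delta %v, within tolerance: %v\n", d, ok)
	}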
	I0816 10:02:45.036338    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:45.036468    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:02:45.062334    3758 out.go:177] * Found network options:
	I0816 10:02:45.105996    3758 out.go:177]   - NO_PROXY=192.169.0.5
	W0816 10:02:45.128174    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:02:45.128216    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:45.128829    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:45.128969    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:45.129060    3758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 10:02:45.129088    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	W0816 10:02:45.129115    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:02:45.129222    3758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0816 10:02:45.129232    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:45.129238    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:45.129370    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:45.129393    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:45.129500    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:45.129515    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:45.129631    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:02:45.129647    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:45.129727    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	W0816 10:02:45.164265    3758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 10:02:45.164324    3758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 10:02:45.211468    3758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 10:02:45.211489    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:02:45.211595    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:02:45.228144    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 10:02:45.237310    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 10:02:45.246272    3758 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 10:02:45.246316    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 10:02:45.255987    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:02:45.265400    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 10:02:45.274661    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:02:45.283628    3758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 10:02:45.292648    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 10:02:45.302207    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 10:02:45.311314    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 10:02:45.320414    3758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 10:02:45.328532    3758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 10:02:45.337229    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:45.444954    3758 ssh_runner.go:195] Run: sudo systemctl restart containerd
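	The run of sed commands above rewrites /etc/containerd/config.toml in place, most importantly flipping SystemdCgroup off to select the cgroupfs driver. The same substitution expressed in Go, applied to a string instead of via `sudo sed` over SSH; the sample TOML fragment is an assumption for illustration.

	// cgroupfs_sketch.go: mirror of
	//   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	  SystemdCgroup = true
	`
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
	}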
	I0816 10:02:45.464569    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:02:45.464652    3758 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 10:02:45.488292    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:02:45.500162    3758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 10:02:45.561835    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:02:45.572027    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:02:45.582122    3758 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 10:02:45.603011    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:02:45.613488    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:02:45.628434    3758 ssh_runner.go:195] Run: which cri-dockerd
	I0816 10:02:45.631358    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 10:02:45.638496    3758 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 10:02:45.652676    3758 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 10:02:45.755971    3758 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 10:02:45.860533    3758 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 10:02:45.860559    3758 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 10:02:45.874570    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:45.976095    3758 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:02:48.459358    3758 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.483288325s)
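	Before the docker restart above, docker.go:574 writes a small daemon.json (130 bytes) to set the cgroupfs driver. The log does not show the file's contents, so the exec-opts form below is an assumption based on Docker's documented native.cgroupdriver option; the sketch only marshals the structure.

	// daemonjson_sketch.go: emit a plausible /etc/docker/daemon.json selecting
	// the cgroupfs driver. Keys are assumed, not read from the log.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		cfg := map[string]any{
			"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
		}
		b, err := json.MarshalIndent(cfg, "", "  ")
		if err != nil {
			panic(err)
		}
		fmt.Println(string(b))
	}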
	I0816 10:02:48.459417    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0816 10:02:48.469882    3758 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0816 10:02:48.484429    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:02:48.495038    3758 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0816 10:02:48.597129    3758 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0816 10:02:48.691412    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:48.797592    3758 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0816 10:02:48.812075    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:02:48.823307    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:48.918652    3758 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0816 10:02:48.980191    3758 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0816 10:02:48.980257    3758 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0816 10:02:48.984954    3758 start.go:563] Will wait 60s for crictl version
	I0816 10:02:48.985011    3758 ssh_runner.go:195] Run: which crictl
	I0816 10:02:48.988185    3758 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 10:02:49.015262    3758 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0816 10:02:49.015344    3758 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:02:49.032341    3758 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:02:49.078128    3758 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0816 10:02:49.137645    3758 out.go:177]   - env NO_PROXY=192.169.0.5
	I0816 10:02:49.176661    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:02:49.177129    3758 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0816 10:02:49.181778    3758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
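	The /etc/hosts edit above is a grep-and-append upsert: drop any stale host.minikube.internal line, then append the fresh mapping. The same logic in Go, operating on a string for illustration rather than on the guest's real /etc/hosts:

	// hostsupsert_sketch.go: remove an existing entry for the name, append
	// the new ip<TAB>name line, matching the shell pipeline in the log.
	package main

	import (
		"fmt"
		"strings"
	)

	func upsertHost(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return strings.Join(kept, "\n") + "\n"
	}

	func main() {
		fmt.Print(upsertHost("127.0.0.1\tlocalhost\n", "192.169.0.1", "host.minikube.internal"))
	}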
	I0816 10:02:49.191522    3758 mustload.go:65] Loading cluster: ha-286000
	I0816 10:02:49.191665    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:02:49.191885    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:49.191909    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:49.200721    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51064
	I0816 10:02:49.201069    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:49.201413    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:49.201424    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:49.201648    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:49.201776    3758 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:02:49.201862    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:49.201960    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:02:49.202920    3758 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:02:49.203188    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:49.203205    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:49.211926    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51066
	I0816 10:02:49.212258    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:49.212635    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:49.212652    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:49.212860    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:49.212967    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:49.213056    3758 certs.go:68] Setting up /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000 for IP: 192.169.0.6
	I0816 10:02:49.213061    3758 certs.go:194] generating shared ca certs ...
	I0816 10:02:49.213071    3758 certs.go:226] acquiring lock for ca certs: {Name:mka549fbc467849834d2267da204028c49d88a3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:49.213229    3758 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key
	I0816 10:02:49.213305    3758 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key
	I0816 10:02:49.213314    3758 certs.go:256] generating profile certs ...
	I0816 10:02:49.213406    3758 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key
	I0816 10:02:49.213429    3758 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.5eb44438
	I0816 10:02:49.213443    3758 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.5eb44438 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I0816 10:02:49.263058    3758 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.5eb44438 ...
	I0816 10:02:49.263077    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.5eb44438: {Name:mk266d5e842df442c104f85113e06d171d5a89ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:49.263395    3758 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.5eb44438 ...
	I0816 10:02:49.263404    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.5eb44438: {Name:mk1428912a4c1edaafe55f9bff302c19ad337785 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:49.263623    3758 certs.go:381] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.5eb44438 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt
	I0816 10:02:49.263846    3758 certs.go:385] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.5eb44438 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key
	I0816 10:02:49.264093    3758 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key
	I0816 10:02:49.264103    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 10:02:49.264125    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 10:02:49.264143    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 10:02:49.264163    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 10:02:49.264180    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 10:02:49.264199    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 10:02:49.264223    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 10:02:49.264242    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 10:02:49.264331    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem (1338 bytes)
	W0816 10:02:49.264378    3758 certs.go:480] ignoring /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831_empty.pem, impossibly tiny 0 bytes
	I0816 10:02:49.264386    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 10:02:49.264418    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem (1078 bytes)
	I0816 10:02:49.264449    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem (1123 bytes)
	I0816 10:02:49.264477    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem (1679 bytes)
	I0816 10:02:49.264545    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:02:49.264593    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /usr/share/ca-certificates/18312.pem
	I0816 10:02:49.264614    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:49.264634    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem -> /usr/share/ca-certificates/1831.pem
	I0816 10:02:49.264663    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:49.264814    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:49.264897    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:49.265014    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:49.265108    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:49.296192    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0816 10:02:49.299577    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0816 10:02:49.312765    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0816 10:02:49.316551    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0816 10:02:49.332167    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0816 10:02:49.336199    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0816 10:02:49.349166    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0816 10:02:49.354242    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0816 10:02:49.364119    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0816 10:02:49.367349    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0816 10:02:49.375423    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0816 10:02:49.378717    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0816 10:02:49.386918    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 10:02:49.408036    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 10:02:49.428163    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 10:02:49.448952    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 10:02:49.468986    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0816 10:02:49.489674    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 10:02:49.509597    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 10:02:49.530175    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 10:02:49.550578    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /usr/share/ca-certificates/18312.pem (1708 bytes)
	I0816 10:02:49.571266    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 10:02:49.590995    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem --> /usr/share/ca-certificates/1831.pem (1338 bytes)
	I0816 10:02:49.611766    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0816 10:02:49.625222    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0816 10:02:49.638852    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0816 10:02:49.653497    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0816 10:02:49.667098    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0816 10:02:49.680935    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0816 10:02:49.695437    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0816 10:02:49.708684    3758 ssh_runner.go:195] Run: openssl version
	I0816 10:02:49.712979    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18312.pem && ln -fs /usr/share/ca-certificates/18312.pem /etc/ssl/certs/18312.pem"
	I0816 10:02:49.721565    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18312.pem
	I0816 10:02:49.725513    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 16:57 /usr/share/ca-certificates/18312.pem
	I0816 10:02:49.725580    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18312.pem
	I0816 10:02:49.730364    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18312.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 10:02:49.739184    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 10:02:49.747494    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:49.750923    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:48 /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:49.750955    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:49.755195    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 10:02:49.763703    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1831.pem && ln -fs /usr/share/ca-certificates/1831.pem /etc/ssl/certs/1831.pem"
	I0816 10:02:49.772914    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1831.pem
	I0816 10:02:49.776377    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 16:57 /usr/share/ca-certificates/1831.pem
	I0816 10:02:49.776421    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1831.pem
	I0816 10:02:49.780731    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1831.pem /etc/ssl/certs/51391683.0"
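	The openssl/ln pairs above install each CA into the guest's trust store: `openssl x509 -hash` yields the subject hash (e.g. b5213941 for minikubeCA.pem, per the symlink names), and a <hash>.0 symlink in /etc/ssl/certs is how OpenSSL locates the cert. A minimal Go sketch of one such step; it shells out to openssl and needs write access to /etc/ssl/certs, and the paths are the ones from the log.

	// certlink_sketch.go: compute the OpenSSL subject hash for a CA cert and
	// create the /etc/ssl/certs/<hash>.0 symlink pointing at it.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. b5213941, as in the log
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		_ = os.Remove(link) // replace any stale link
		if err := os.Symlink(cert, link); err != nil {
			panic(err)
		}
	}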
	I0816 10:02:49.789078    3758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 10:02:49.792242    3758 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 10:02:49.792277    3758 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.0 docker true true} ...
	I0816 10:02:49.792332    3758 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-286000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 10:02:49.792349    3758 kube-vip.go:115] generating kube-vip config ...
	I0816 10:02:49.792381    3758 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 10:02:49.804469    3758 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 10:02:49.804511    3758 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
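	The kube-vip static-pod manifest above carries the HA VIP (192.169.0.254) and load-balancer port purely as container env vars. As a sanity-check sketch, here is a small Go program that unmarshals an abbreviated copy of that manifest and reads those env values back out; it uses gopkg.in/yaml.v3, and the struct covers only the fields being read.

	// kubevipcheck_sketch.go: decode a trimmed kube-vip pod manifest and print
	// the env pairs that carry the VIP configuration.
	package main

	import (
		"fmt"

		"gopkg.in/yaml.v3"
	)

	type pod struct {
		Spec struct {
			Containers []struct {
				Env []struct {
					Name  string `yaml:"name"`
					Value string `yaml:"value"`
				} `yaml:"env"`
			} `yaml:"containers"`
		} `yaml:"spec"`
	}

	func main() {
		manifest := `
	spec:
	  containers:
	  - env:
	    - name: address
	      value: 192.169.0.254
	    - name: lb_port
	      value: "8443"
	`
		var p pod
		if err := yaml.Unmarshal([]byte(manifest), &p); err != nil {
			panic(err)
		}
		for _, e := range p.Spec.Containers[0].Env {
			fmt.Printf("%s=%s\n", e.Name, e.Value)
		}
	}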
	I0816 10:02:49.804567    3758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 10:02:49.812921    3758 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0816 10:02:49.812983    3758 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0816 10:02:49.821031    3758 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0816 10:02:49.821035    3758 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0816 10:02:49.821039    3758 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubectl
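	The three download.go:107 lines above fetch kubelet, kubeadm, and kubectl with a `checksum=file:...sha256` companion URL. A minimal Go sketch of that verify-before-install pattern for one artifact; retries and caching are abbreviated, and kubectl/v1.31.0 is the artifact from the log.

	// checksumdl_sketch.go: fetch a release binary and its .sha256 file from
	// dl.k8s.io, then compare digests before trusting the download.
	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"strings"
	)

	func fetch(url string) ([]byte, error) {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
		}
		return io.ReadAll(resp.Body)
	}

	func main() {
		url := "https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl"
		bin, err := fetch(url)
		if err != nil {
			panic(err)
		}
		sumFile, err := fetch(url + ".sha256")
		if err != nil {
			panic(err)
		}
		want := strings.Fields(string(sumFile))[0]
		sum := sha256.Sum256(bin)
		if got := hex.EncodeToString(sum[:]); got != want {
			panic("checksum mismatch: " + got + " != " + want)
		}
		fmt.Println("kubectl checksum verified")
	}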
	I0816 10:02:51.911279    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0816 10:02:51.911374    3758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0816 10:02:51.915115    3758 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0816 10:02:51.915150    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0816 10:02:52.537289    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0816 10:02:52.537383    3758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0816 10:02:52.540898    3758 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0816 10:02:52.540929    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0816 10:02:53.267467    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:02:53.278589    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0816 10:02:53.278731    3758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0816 10:02:53.282055    3758 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0816 10:02:53.282073    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0816 10:02:53.498714    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0816 10:02:53.506112    3758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0816 10:02:53.519672    3758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 10:02:53.533551    3758 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0816 10:02:53.548024    3758 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0816 10:02:53.551124    3758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:02:53.560470    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:53.659270    3758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:02:53.674625    3758 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:02:53.674900    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:53.674923    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:53.683891    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51093
	I0816 10:02:53.684256    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:53.684641    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:53.684658    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:53.684885    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:53.685003    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:53.685084    3758 start.go:317] joinCluster: &{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:02:53.685155    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0816 10:02:53.685167    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:53.685248    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:53.685339    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:53.685423    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:53.685503    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:53.765911    3758 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:02:53.765972    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mpmdnf.6rzc0wpb01e0t6lz --discovery-token-ca-cert-hash sha256:995590413ea712c4f7db2cf849d28264ebc291e6cd53d741ce1d22c1daaa1602 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-286000-m02 --control-plane --apiserver-advertise-address=192.169.0.6 --apiserver-bind-port=8443"
	I0816 10:03:22.070391    3758 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mpmdnf.6rzc0wpb01e0t6lz --discovery-token-ca-cert-hash sha256:995590413ea712c4f7db2cf849d28264ebc291e6cd53d741ce1d22c1daaa1602 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-286000-m02 --control-plane --apiserver-advertise-address=192.169.0.6 --apiserver-bind-port=8443": (28.304928658s)
	I0816 10:03:22.070414    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0816 10:03:22.427732    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-286000-m02 minikube.k8s.io/updated_at=2024_08_16T10_03_22_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd minikube.k8s.io/name=ha-286000 minikube.k8s.io/primary=false
	I0816 10:03:22.531211    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-286000-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0816 10:03:22.629050    3758 start.go:319] duration metric: took 28.944509117s to joinCluster
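	After the 28s kubeadm join completes, the two kubectl invocations above label the new node with minikube metadata and remove the control-plane NoSchedule taint so it can run workloads. A minimal Go sketch of that post-join bookkeeping, shelling out to kubectl the same way the log does; kubectl on PATH and a working kubeconfig are assumed, and the label set is trimmed to two keys.

	// postjoin_sketch.go: apply node labels and drop the control-plane taint
	// for a freshly joined node.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(args ...string) {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			panic(err)
		}
	}

	func main() {
		run("label", "--overwrite", "nodes", "ha-286000-m02",
			"minikube.k8s.io/name=ha-286000", "minikube.k8s.io/primary=false")
		run("taint", "nodes", "ha-286000-m02",
			"node-role.kubernetes.io/control-plane:NoSchedule-")
	}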
	I0816 10:03:22.629105    3758 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:03:22.629360    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:03:22.652598    3758 out.go:177] * Verifying Kubernetes components...
	I0816 10:03:22.726472    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:03:22.952731    3758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:03:22.975401    3758 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:03:22.975621    3758 kapi.go:59] client config for ha-286000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key", CAFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xa202f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0816 10:03:22.975663    3758 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0816 10:03:22.975836    3758 node_ready.go:35] waiting up to 6m0s for node "ha-286000-m02" to be "Ready" ...
	I0816 10:03:22.975886    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:22.975891    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:22.975897    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:22.975901    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:22.983180    3758 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0816 10:03:23.476314    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:23.476365    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:23.476380    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:23.476394    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:23.502874    3758 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0816 10:03:23.977290    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:23.977304    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:23.977311    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:23.977315    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:23.979495    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:24.477835    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:24.477851    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:24.477858    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:24.477861    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:24.480701    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:24.977514    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:24.977568    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:24.977580    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:24.977588    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:24.980856    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:24.981299    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:25.476235    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:25.476252    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:25.476260    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:25.476277    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:25.478567    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:25.976418    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:25.976469    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:25.976477    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:25.976480    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:25.978730    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:26.476423    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:26.476439    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:26.476445    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:26.476448    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:26.478424    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:26.975919    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:26.975934    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:26.975940    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:26.975945    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:26.977809    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:27.475966    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:27.475981    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:27.475987    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:27.475990    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:27.477769    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:27.478121    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:27.977123    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:27.977139    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:27.977146    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:27.977150    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:27.979266    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:28.477140    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:28.477226    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:28.477254    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:28.477264    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:28.480386    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:28.977368    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:28.977407    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:28.977420    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:28.977427    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:28.980479    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:29.477521    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:29.477545    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:29.477556    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:29.477565    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:29.481179    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:29.481510    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:29.975906    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:29.975932    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:29.975943    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:29.975949    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:29.978633    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:30.477171    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:30.477189    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:30.477198    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:30.477201    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:30.480005    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:30.977947    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:30.977973    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:30.977993    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:30.977998    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:30.982219    3758 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 10:03:31.476252    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:31.476327    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:31.476341    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:31.476347    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:31.479417    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:31.975987    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:31.976012    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:31.976023    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:31.976029    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:31.979409    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:31.979880    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:32.477180    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:32.477207    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:32.477229    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:32.477240    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:32.480686    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:32.976225    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:32.976250    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:32.976261    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:32.976269    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:32.979533    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:33.475956    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:33.475980    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:33.475991    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:33.475997    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:33.479025    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:33.977111    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:33.977185    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:33.977198    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:33.977204    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:33.980426    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:33.980922    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:34.476455    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:34.476470    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:34.476477    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:34.476481    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:34.478517    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:34.976996    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:34.977037    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:34.977049    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:34.977084    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:34.980500    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:35.476045    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:35.476068    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:35.476079    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:35.476086    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:35.479058    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:35.975920    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:35.975944    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:35.975956    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:35.975962    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:35.979071    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:36.477845    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:36.477872    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:36.477885    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:36.477946    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:36.481401    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:36.481890    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:36.977033    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:36.977057    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:36.977069    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:36.977076    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:36.980476    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:37.477095    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:37.477120    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:37.477133    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:37.477141    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:37.480485    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:37.976303    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:37.976320    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:37.976343    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:37.976348    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:37.978551    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:38.476492    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:38.476517    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.476528    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.476536    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.480190    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:38.976091    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:38.976114    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.976124    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.976129    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.979741    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:38.980051    3758 node_ready.go:49] node "ha-286000-m02" has status "Ready":"True"
	I0816 10:03:38.980066    3758 node_ready.go:38] duration metric: took 16.004516347s for node "ha-286000-m02" to be "Ready" ...
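The stretch above is a plain readiness poll: a GET of /api/v1/nodes/ha-286000-m02 roughly every 500ms, with a `"Ready":"False"` line logged every few iterations, until the condition flips after ~16s. A compilable sketch of the same loop (clientset built as in the previous sketch; this approximates, but is not, minikube's node_ready.go):

    package sketch

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the node every 500ms (the cadence visible above)
    // until its Ready condition is True or the timeout elapses.
    func waitNodeReady(ctx context.Context, c kubernetes.Interface, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    		if err == nil {
    			for _, cond := range node.Status.Conditions {
    				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
    					return nil // the `"Ready":"True"` case above
    				}
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("node %q was not Ready within %s", name, timeout)
    }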
	I0816 10:03:38.980077    3758 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 10:03:38.980127    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:03:38.980135    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.980142    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.980152    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.982937    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:38.987546    3758 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-2kqjf" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.987591    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-2kqjf
	I0816 10:03:38.987597    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.987603    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.987607    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.989339    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.989748    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:38.989758    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.989764    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.989768    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.991324    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.991660    3758 pod_ready.go:93] pod "coredns-6f6b679f8f-2kqjf" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:38.991671    3758 pod_ready.go:82] duration metric: took 4.111381ms for pod "coredns-6f6b679f8f-2kqjf" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.991678    3758 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-rfbz7" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.991708    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-rfbz7
	I0816 10:03:38.991713    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.991718    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.991721    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.993245    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.993624    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:38.993631    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.993640    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.993644    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.995095    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.995419    3758 pod_ready.go:93] pod "coredns-6f6b679f8f-rfbz7" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:38.995427    3758 pod_ready.go:82] duration metric: took 3.744646ms for pod "coredns-6f6b679f8f-rfbz7" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.995433    3758 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.995467    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-286000
	I0816 10:03:38.995472    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.995478    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.995482    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.997007    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.997390    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:38.997397    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.997403    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.997406    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.998839    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.999151    3758 pod_ready.go:93] pod "etcd-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:38.999160    3758 pod_ready.go:82] duration metric: took 3.721879ms for pod "etcd-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.999165    3758 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.999202    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-286000-m02
	I0816 10:03:38.999207    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.999213    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.999217    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.001189    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:39.001547    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:39.001554    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.001559    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.001563    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.003004    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:39.003290    3758 pod_ready.go:93] pod "etcd-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:39.003298    3758 pod_ready.go:82] duration metric: took 4.127582ms for pod "etcd-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.003307    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.177834    3758 request.go:632] Waited for 174.443582ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000
	I0816 10:03:39.177948    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000
	I0816 10:03:39.177963    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.177974    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.177981    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.181371    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
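The "Waited for ... due to client-side throttling, not priority and fairness" messages come from client-go's own token-bucket rate limiter, not from the API server: with QPS and Burst left at zero in the rest.Config dumped earlier, client-go falls back to its documented defaults (5 QPS, burst 10), so these back-to-back pod and node GETs each queue for roughly 200ms. A sketch of raising the limits for a chatty poller (kubeconfig path assumed; the values are arbitrary):

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	// Zero QPS/Burst means client-go's defaults (5 QPS, burst 10), which is
    	// what produces the throttling waits in the log above.
    	cfg.QPS = 50
    	cfg.Burst = 100
    	if _, err := kubernetes.NewForConfig(cfg); err != nil {
    		panic(err)
    	}
    	fmt.Println("client configured with QPS", cfg.QPS, "and burst", cfg.Burst)
    }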
	I0816 10:03:39.376438    3758 request.go:632] Waited for 194.538822ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:39.376562    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:39.376574    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.376586    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.376596    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.379989    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:39.380425    3758 pod_ready.go:93] pod "kube-apiserver-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:39.380437    3758 pod_ready.go:82] duration metric: took 377.131323ms for pod "kube-apiserver-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.380446    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.576633    3758 request.go:632] Waited for 196.095353ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000-m02
	I0816 10:03:39.576687    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000-m02
	I0816 10:03:39.576696    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.576769    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.576793    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.580164    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:39.776642    3758 request.go:632] Waited for 195.66937ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:39.776725    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:39.776735    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.776746    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.776752    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.780273    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:39.780714    3758 pod_ready.go:93] pod "kube-apiserver-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:39.780723    3758 pod_ready.go:82] duration metric: took 400.274102ms for pod "kube-apiserver-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.780730    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.977598    3758 request.go:632] Waited for 196.714211ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000
	I0816 10:03:39.977650    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000
	I0816 10:03:39.977659    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.977670    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.977679    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.981079    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.177460    3758 request.go:632] Waited for 195.94045ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:40.177528    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:40.177589    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:40.177603    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:40.177609    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:40.180971    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.181992    3758 pod_ready.go:93] pod "kube-controller-manager-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:40.182036    3758 pod_ready.go:82] duration metric: took 401.277819ms for pod "kube-controller-manager-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:40.182047    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:40.376258    3758 request.go:632] Waited for 194.111468ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000-m02
	I0816 10:03:40.376327    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000-m02
	I0816 10:03:40.376336    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:40.376347    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:40.376355    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:40.379476    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.576506    3758 request.go:632] Waited for 196.409077ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:40.576552    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:40.576560    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:40.576571    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:40.576580    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:40.579964    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.580641    3758 pod_ready.go:93] pod "kube-controller-manager-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:40.580654    3758 pod_ready.go:82] duration metric: took 398.607786ms for pod "kube-controller-manager-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:40.580663    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pt669" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:40.778194    3758 request.go:632] Waited for 197.469682ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pt669
	I0816 10:03:40.778324    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pt669
	I0816 10:03:40.778333    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:40.778345    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:40.778352    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:40.781925    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.976623    3758 request.go:632] Waited for 194.04689ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:40.976752    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:40.976761    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:40.976772    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:40.976780    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:40.980022    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.980473    3758 pod_ready.go:93] pod "kube-proxy-pt669" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:40.980485    3758 pod_ready.go:82] duration metric: took 399.822578ms for pod "kube-proxy-pt669" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:40.980494    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-w4nt2" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:41.177361    3758 request.go:632] Waited for 196.811255ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w4nt2
	I0816 10:03:41.177533    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w4nt2
	I0816 10:03:41.177545    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:41.177556    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:41.177563    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:41.181543    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:41.377692    3758 request.go:632] Waited for 195.583238ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:41.377791    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:41.377802    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:41.377812    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:41.377819    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:41.381134    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:41.381716    3758 pod_ready.go:93] pod "kube-proxy-w4nt2" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:41.381726    3758 pod_ready.go:82] duration metric: took 401.234529ms for pod "kube-proxy-w4nt2" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:41.381733    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:41.577020    3758 request.go:632] Waited for 195.246357ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000
	I0816 10:03:41.577073    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000
	I0816 10:03:41.577125    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:41.577138    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:41.577147    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:41.580051    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:41.777879    3758 request.go:632] Waited for 197.366145ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:41.777974    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:41.777983    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:41.777995    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:41.778003    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:41.781434    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:41.782012    3758 pod_ready.go:93] pod "kube-scheduler-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:41.782023    3758 pod_ready.go:82] duration metric: took 400.292624ms for pod "kube-scheduler-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:41.782051    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:41.976356    3758 request.go:632] Waited for 194.23341ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000-m02
	I0816 10:03:41.976463    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000-m02
	I0816 10:03:41.976473    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:41.976485    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:41.976494    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:41.980088    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:42.176952    3758 request.go:632] Waited for 196.161737ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:42.177009    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:42.177018    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.177026    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.177083    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.182888    3758 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 10:03:42.183179    3758 pod_ready.go:93] pod "kube-scheduler-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:42.183188    3758 pod_ready.go:82] duration metric: took 401.133714ms for pod "kube-scheduler-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:42.183203    3758 pod_ready.go:39] duration metric: took 3.203173842s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
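Every pod wait above follows the same two-step pattern: GET the pod, inspect its Ready condition, then GET the node it runs on. A sketch of the per-pod half of that check (namespace and condition handling as in the log; this approximates, but is not, minikube's pod_ready.go):

    package sketch

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // podReady reports whether the named pod's PodReady condition is True,
    // mirroring the `has status "Ready":"True"` lines above.
    func podReady(ctx context.Context, c kubernetes.Interface, ns, name string) (bool, error) {
    	pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }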
	I0816 10:03:42.183221    3758 api_server.go:52] waiting for apiserver process to appear ...
	I0816 10:03:42.183274    3758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:03:42.195671    3758 api_server.go:72] duration metric: took 19.566910589s to wait for apiserver process to appear ...
	I0816 10:03:42.195682    3758 api_server.go:88] waiting for apiserver healthz status ...
	I0816 10:03:42.195698    3758 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0816 10:03:42.198837    3758 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0816 10:03:42.198870    3758 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0816 10:03:42.198874    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.198880    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.198884    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.199389    3758 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0816 10:03:42.199468    3758 api_server.go:141] control plane version: v1.31.0
	I0816 10:03:42.199478    3758 api_server.go:131] duration metric: took 3.791893ms to wait for apiserver health ...
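The healthz and version probes are plain HTTPS GETs: /healthz answers with the literal body "ok" and /version reports the control plane build (v1.31.0 here). Both paths are typically readable without credentials (the default system:public-info-viewer binding), so a bare HTTP client reproduces the check; certificate verification is skipped below only to keep the sketch short, whereas minikube verifies against its own CA:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Insecure only for illustration; the apiserver cert is self-signed.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://192.169.0.5:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("%d %s\n", resp.StatusCode, body) // expected: 200 ok
    }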
	I0816 10:03:42.199486    3758 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 10:03:42.378048    3758 request.go:632] Waited for 178.500938ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:03:42.378156    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:03:42.378166    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.378178    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.378186    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.382776    3758 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 10:03:42.386068    3758 system_pods.go:59] 17 kube-system pods found
	I0816 10:03:42.386083    3758 system_pods.go:61] "coredns-6f6b679f8f-2kqjf" [be18cd09-3e3b-4749-bf29-7001a879f593] Running
	I0816 10:03:42.386087    3758 system_pods.go:61] "coredns-6f6b679f8f-rfbz7" [a593e27a-e38b-46c7-a603-44963c31c095] Running
	I0816 10:03:42.386090    3758 system_pods.go:61] "etcd-ha-286000" [c45bc623-befe-4e88-8da1-bb4f7d7be5b2] Running
	I0816 10:03:42.386093    3758 system_pods.go:61] "etcd-ha-286000-m02" [e14fbe0c-4527-4cbb-8c13-01c0b32a8d15] Running
	I0816 10:03:42.386096    3758 system_pods.go:61] "kindnet-t9kjf" [20c57f28-5e86-44a4-8a2a-07e1e240abf2] Running
	I0816 10:03:42.386099    3758 system_pods.go:61] "kindnet-whqxb" [1cc5291b-52ea-44b5-b87c-607d46b5281a] Running
	I0816 10:03:42.386102    3758 system_pods.go:61] "kube-apiserver-ha-286000" [c0d298a9-931b-4084-9e5f-01a88de27456] Running
	I0816 10:03:42.386104    3758 system_pods.go:61] "kube-apiserver-ha-286000-m02" [3666c131-2b88-483a-ab8c-b18cb8db7d6f] Running
	I0816 10:03:42.386120    3758 system_pods.go:61] "kube-controller-manager-ha-286000" [5a4f4c5d-d89a-4e5f-b022-479ca31f64cd] Running
	I0816 10:03:42.386127    3758 system_pods.go:61] "kube-controller-manager-ha-286000-m02" [a347c3d4-fa47-49ea-93a0-83087017a2bd] Running
	I0816 10:03:42.386130    3758 system_pods.go:61] "kube-proxy-pt669" [798c956f-8f08-465a-b0d9-90b65f703f80] Running
	I0816 10:03:42.386139    3758 system_pods.go:61] "kube-proxy-w4nt2" [79ce2248-f8fd-4a3b-aeb7-f81d7ad64564] Running
	I0816 10:03:42.386143    3758 system_pods.go:61] "kube-scheduler-ha-286000" [b4af80e0-cbb0-4257-a212-3585faff0875] Running
	I0816 10:03:42.386145    3758 system_pods.go:61] "kube-scheduler-ha-286000-m02" [89920e93-c959-47ec-8a2c-f2b63e8c3d3a] Running
	I0816 10:03:42.386153    3758 system_pods.go:61] "kube-vip-ha-286000" [b0f85be2-2afb-44b0-a701-161b24529349] Running
	I0816 10:03:42.386156    3758 system_pods.go:61] "kube-vip-ha-286000-m02" [4982dc53-946d-4f75-8e9e-452ef39ff2fa] Running
	I0816 10:03:42.386159    3758 system_pods.go:61] "storage-provisioner" [4805d53b-2db3-4092-a3f2-d4a854e93adc] Running
	I0816 10:03:42.386163    3758 system_pods.go:74] duration metric: took 186.675963ms to wait for pod list to return data ...
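The system_pods sweep is one LIST of the kube-system namespace followed by a per-pod phase check. A sketch of the same enumeration (clientset as before):

    package sketch

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // listSystemPods prints each kube-system pod with its UID and phase,
    // matching the `"<name>" [<uid>] Running` lines above.
    func listSystemPods(ctx context.Context, c kubernetes.Interface) error {
    	pods, err := c.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
    	for _, p := range pods.Items {
    		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
    	}
    	return nil
    }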
	I0816 10:03:42.386170    3758 default_sa.go:34] waiting for default service account to be created ...
	I0816 10:03:42.577030    3758 request.go:632] Waited for 190.795237ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0816 10:03:42.577183    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0816 10:03:42.577198    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.577209    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.577215    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.581020    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:42.581204    3758 default_sa.go:45] found service account: "default"
	I0816 10:03:42.581216    3758 default_sa.go:55] duration metric: took 195.045213ms for default service account to be created ...
	I0816 10:03:42.581224    3758 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 10:03:42.777160    3758 request.go:632] Waited for 195.848894ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:03:42.777252    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:03:42.777268    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.777285    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.777303    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.781637    3758 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 10:03:42.784962    3758 system_pods.go:86] 17 kube-system pods found
	I0816 10:03:42.784974    3758 system_pods.go:89] "coredns-6f6b679f8f-2kqjf" [be18cd09-3e3b-4749-bf29-7001a879f593] Running
	I0816 10:03:42.784978    3758 system_pods.go:89] "coredns-6f6b679f8f-rfbz7" [a593e27a-e38b-46c7-a603-44963c31c095] Running
	I0816 10:03:42.784981    3758 system_pods.go:89] "etcd-ha-286000" [c45bc623-befe-4e88-8da1-bb4f7d7be5b2] Running
	I0816 10:03:42.784984    3758 system_pods.go:89] "etcd-ha-286000-m02" [e14fbe0c-4527-4cbb-8c13-01c0b32a8d15] Running
	I0816 10:03:42.784987    3758 system_pods.go:89] "kindnet-t9kjf" [20c57f28-5e86-44a4-8a2a-07e1e240abf2] Running
	I0816 10:03:42.784990    3758 system_pods.go:89] "kindnet-whqxb" [1cc5291b-52ea-44b5-b87c-607d46b5281a] Running
	I0816 10:03:42.784992    3758 system_pods.go:89] "kube-apiserver-ha-286000" [c0d298a9-931b-4084-9e5f-01a88de27456] Running
	I0816 10:03:42.784995    3758 system_pods.go:89] "kube-apiserver-ha-286000-m02" [3666c131-2b88-483a-ab8c-b18cb8db7d6f] Running
	I0816 10:03:42.784997    3758 system_pods.go:89] "kube-controller-manager-ha-286000" [5a4f4c5d-d89a-4e5f-b022-479ca31f64cd] Running
	I0816 10:03:42.785001    3758 system_pods.go:89] "kube-controller-manager-ha-286000-m02" [a347c3d4-fa47-49ea-93a0-83087017a2bd] Running
	I0816 10:03:42.785003    3758 system_pods.go:89] "kube-proxy-pt669" [798c956f-8f08-465a-b0d9-90b65f703f80] Running
	I0816 10:03:42.785006    3758 system_pods.go:89] "kube-proxy-w4nt2" [79ce2248-f8fd-4a3b-aeb7-f81d7ad64564] Running
	I0816 10:03:42.785009    3758 system_pods.go:89] "kube-scheduler-ha-286000" [b4af80e0-cbb0-4257-a212-3585faff0875] Running
	I0816 10:03:42.785011    3758 system_pods.go:89] "kube-scheduler-ha-286000-m02" [89920e93-c959-47ec-8a2c-f2b63e8c3d3a] Running
	I0816 10:03:42.785015    3758 system_pods.go:89] "kube-vip-ha-286000" [b0f85be2-2afb-44b0-a701-161b24529349] Running
	I0816 10:03:42.785017    3758 system_pods.go:89] "kube-vip-ha-286000-m02" [4982dc53-946d-4f75-8e9e-452ef39ff2fa] Running
	I0816 10:03:42.785023    3758 system_pods.go:89] "storage-provisioner" [4805d53b-2db3-4092-a3f2-d4a854e93adc] Running
	I0816 10:03:42.785028    3758 system_pods.go:126] duration metric: took 203.80313ms to wait for k8s-apps to be running ...
	I0816 10:03:42.785034    3758 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 10:03:42.785088    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:03:42.796381    3758 system_svc.go:56] duration metric: took 11.343439ms WaitForService to wait for kubelet
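The kubelet check leans on systemctl's exit status: `systemctl is-active --quiet <unit>` exits 0 only when the unit is active. minikube issues the command over SSH inside the VM via ssh_runner; run locally against a host's own kubelet unit, the equivalent probe is a one-liner (sketch):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Exit status 0 means the unit is active; any non-zero status (or a
    	// missing unit) surfaces as a non-nil error from Run.
    	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
    	fmt.Println("kubelet active:", err == nil)
    }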
	I0816 10:03:42.796397    3758 kubeadm.go:582] duration metric: took 20.16764951s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:03:42.796409    3758 node_conditions.go:102] verifying NodePressure condition ...
	I0816 10:03:42.977594    3758 request.go:632] Waited for 181.143005ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0816 10:03:42.977695    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0816 10:03:42.977706    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.977719    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.977724    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.981373    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:42.981988    3758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 10:03:42.982013    3758 node_conditions.go:123] node cpu capacity is 2
	I0816 10:03:42.982039    3758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 10:03:42.982043    3758 node_conditions.go:123] node cpu capacity is 2
	I0816 10:03:42.982046    3758 node_conditions.go:105] duration metric: took 185.637747ms to run NodePressure ...
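The NodePressure pass reads each node's capacity out of .Status.Capacity, which is where the paired ephemeral-storage/cpu lines above (one pair per control-plane node) come from. A sketch of the same readout (clientset as before):

    package sketch

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // printNodeCapacity lists every node and reports the two capacity figures
    // the log verifies: ephemeral storage and cpu count.
    func printNodeCapacity(ctx context.Context, c kubernetes.Interface) error {
    	nodes, err := c.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, n := range nodes.Items {
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
    	}
    	return nil
    }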
	I0816 10:03:42.982055    3758 start.go:241] waiting for startup goroutines ...
	I0816 10:03:42.982079    3758 start.go:255] writing updated cluster config ...
	I0816 10:03:43.003740    3758 out.go:201] 
	I0816 10:03:43.024885    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:03:43.024976    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:03:43.047776    3758 out.go:177] * Starting "ha-286000-m03" control-plane node in "ha-286000" cluster
	I0816 10:03:43.137621    3758 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:03:43.137655    3758 cache.go:56] Caching tarball of preloaded images
	I0816 10:03:43.137861    3758 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 10:03:43.137884    3758 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:03:43.138022    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:03:43.159475    3758 start.go:360] acquireMachinesLock for ha-286000-m03: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:03:43.159592    3758 start.go:364] duration metric: took 91.442µs to acquireMachinesLock for "ha-286000-m03"
	I0816 10:03:43.159625    3758 start.go:93] Provisioning new machine with config: &{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:03:43.159736    3758 start.go:125] createHost starting for "m03" (driver="hyperkit")
	I0816 10:03:43.180569    3758 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 10:03:43.180719    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:03:43.180762    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:03:43.191087    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51098
	I0816 10:03:43.191558    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:03:43.191889    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:03:43.191899    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:03:43.192106    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:03:43.192302    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:03:43.192394    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:43.192506    3758 start.go:159] libmachine.API.Create for "ha-286000" (driver="hyperkit")
	I0816 10:03:43.192526    3758 client.go:168] LocalClient.Create starting
	I0816 10:03:43.192559    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem
	I0816 10:03:43.192606    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:03:43.192620    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:03:43.192675    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem
	I0816 10:03:43.192704    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:03:43.192714    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:03:43.192726    3758 main.go:141] libmachine: Running pre-create checks...
	I0816 10:03:43.192731    3758 main.go:141] libmachine: (ha-286000-m03) Calling .PreCreateCheck
	I0816 10:03:43.192813    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:43.192833    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetConfigRaw
	I0816 10:03:43.202038    3758 main.go:141] libmachine: Creating machine...
	I0816 10:03:43.202062    3758 main.go:141] libmachine: (ha-286000-m03) Calling .Create
	I0816 10:03:43.202302    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:43.202574    3758 main.go:141] libmachine: (ha-286000-m03) DBG | I0816 10:03:43.202284    3848 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:03:43.202705    3758 main.go:141] libmachine: (ha-286000-m03) Downloading /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1276/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 10:03:43.529965    3758 main.go:141] libmachine: (ha-286000-m03) DBG | I0816 10:03:43.529902    3848 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa...
	I0816 10:03:43.631057    3758 main.go:141] libmachine: (ha-286000-m03) DBG | I0816 10:03:43.630965    3848 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/ha-286000-m03.rawdisk...
	I0816 10:03:43.631074    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Writing magic tar header
	I0816 10:03:43.631083    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Writing SSH key tar header
	I0816 10:03:43.631711    3758 main.go:141] libmachine: (ha-286000-m03) DBG | I0816 10:03:43.631681    3848 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03 ...
	I0816 10:03:44.155270    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:44.155286    3758 main.go:141] libmachine: (ha-286000-m03) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/hyperkit.pid
	I0816 10:03:44.155318    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Using UUID 408efb18-ff91-428f-b414-6eb0d982a779
	I0816 10:03:44.181981    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Generated MAC 8a:e:de:5b:b5:8b
	I0816 10:03:44.182010    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000
	I0816 10:03:44.182059    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"408efb18-ff91-428f-b414-6eb0d982a779", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:03:44.182101    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"408efb18-ff91-428f-b414-6eb0d982a779", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:03:44.182228    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "408efb18-ff91-428f-b414-6eb0d982a779", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/ha-286000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"}
	I0816 10:03:44.182306    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 408efb18-ff91-428f-b414-6eb0d982a779 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/ha-286000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"
	I0816 10:03:44.182332    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 10:03:44.185479    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: Pid is 3849
	I0816 10:03:44.185900    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 0
	I0816 10:03:44.185912    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:44.186025    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:44.186994    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:44.187062    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:03:44.187079    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:03:44.187114    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:03:44.187142    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:03:44.187168    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:03:44.187185    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:03:44.193352    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 10:03:44.201781    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 10:03:44.202595    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:03:44.202610    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:03:44.202637    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:03:44.202653    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:03:44.588441    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 10:03:44.588458    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 10:03:44.703100    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:03:44.703117    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:03:44.703125    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:03:44.703135    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:03:44.703952    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 10:03:44.703963    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 10:03:46.188345    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 1
	I0816 10:03:46.188360    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:46.188480    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:46.189255    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:46.189306    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:03:46.189321    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:03:46.189335    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:03:46.189352    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:03:46.189359    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:03:46.189385    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:03:48.190823    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 2
	I0816 10:03:48.190838    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:48.190916    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:48.191692    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:48.191747    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:03:48.191757    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:03:48.191779    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:03:48.191787    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:03:48.191803    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:03:48.191815    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:03:50.191897    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 3
	I0816 10:03:50.191913    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:50.191977    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:50.192744    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:50.192793    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:03:50.192802    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:03:50.192811    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:03:50.192820    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:03:50.192836    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:03:50.192850    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:03:50.310705    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:50 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 10:03:50.310770    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:50 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 10:03:50.310783    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:50 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 10:03:50.334054    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:50 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 10:03:52.193443    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 4
	I0816 10:03:52.193459    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:52.193535    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:52.194319    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:52.194367    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:03:52.194379    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:03:52.194390    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:03:52.194399    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:03:52.194408    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:03:52.194419    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:03:54.195434    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 5
	I0816 10:03:54.195456    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:54.195540    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:54.196406    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:54.196510    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 6 entries in /var/db/dhcpd_leases!
	I0816 10:03:54.196530    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66c0d7f9}
	I0816 10:03:54.196551    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found match: 8a:e:de:5b:b5:8b
	I0816 10:03:54.196553    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetConfigRaw
	I0816 10:03:54.196565    3758 main.go:141] libmachine: (ha-286000-m03) DBG | IP: 192.169.0.7
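
[Annotation] The "Attempt N" polling above is how the hyperkit driver discovers the new VM's IP: every couple of seconds it scans /var/db/dhcpd_leases for the MAC address it generated until a matching lease appears. A minimal Go sketch of that loop, with the lease-file layout assumed from the dhcp entries printed above (an illustration, not the driver's actual source):

// Illustrative sketch only -- not the hyperkit driver's real implementation.
// Poll /var/db/dhcpd_leases until the VM's generated MAC shows up, mirroring
// the "Attempt N / Searching for <mac>" lines in the log above.
package main

import (
	"fmt"
	"os"
	"strings"
	"time"
)

// findLeaseIP scans the macOS DHCP lease file for a block containing the
// given hardware address and returns its ip_address field, if present.
// The "hw_address=1,<mac>" form is assumed from the log's dhcp entries.
func findLeaseIP(mac string) (string, bool) {
	data, err := os.ReadFile("/var/db/dhcpd_leases")
	if err != nil {
		return "", false
	}
	for _, block := range strings.Split(string(data), "}") {
		if !strings.Contains(block, "hw_address=1,"+mac) {
			continue
		}
		for _, field := range strings.Fields(block) {
			if ip, ok := strings.CutPrefix(field, "ip_address="); ok {
				return ip, true
			}
		}
	}
	return "", false
}

func main() {
	mac := "8a:e:de:5b:b5:8b" // MAC generated for ha-286000-m03 above
	for attempt := 0; attempt < 60; attempt++ {
		if ip, ok := findLeaseIP(mac); ok {
			fmt.Printf("Found match: %s -> IP: %s\n", mac, ip)
			return
		}
		fmt.Printf("Attempt %d: no lease yet for %s\n", attempt, mac)
		time.Sleep(2 * time.Second) // the log polls roughly every 2s
	}
	fmt.Println("gave up waiting for a DHCP lease")
}
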
	I0816 10:03:54.197236    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:54.197351    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:54.197441    3758 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 10:03:54.197454    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetState
	I0816 10:03:54.197541    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:54.197611    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:54.198439    3758 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 10:03:54.198446    3758 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 10:03:54.198450    3758 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 10:03:54.198454    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:54.198532    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:54.198612    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:54.198695    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:54.198783    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:54.198892    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:54.199063    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:54.199071    3758 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 10:03:55.266165    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
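
[Annotation] The `exit 0` probe above is the whole of the SSH-readiness check: connect as the docker user with the machine's generated id_rsa and run a no-op command. A minimal sketch of the same probe using golang.org/x/crypto/ssh (an illustration, not libmachine's WaitForSSH; ignoring the host key is only defensible because the VM was created moments earlier):

// Illustrative sketch -- not libmachine's actual WaitForSSH implementation.
// Requires the golang.org/x/crypto/ssh module.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// sshExitZero connects with the given private key and runs `exit 0`,
// succeeding only once sshd inside the guest is accepting logins.
func sshExitZero(addr, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fresh VM, no known_hosts yet
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.Run("exit 0")
}

func main() {
	key := "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa"
	if err := sshExitZero("192.169.0.7:22", key); err != nil {
		fmt.Println("ssh not ready:", err)
	}
}
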
	I0816 10:03:55.266179    3758 main.go:141] libmachine: Detecting the provisioner...
	I0816 10:03:55.266185    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.266318    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.266413    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.266507    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.266601    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.266731    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.266879    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.266886    3758 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 10:03:55.332778    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 10:03:55.332822    3758 main.go:141] libmachine: found compatible host: buildroot
	I0816 10:03:55.332828    3758 main.go:141] libmachine: Provisioning with buildroot...
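
[Annotation] The provisioner decision keys off the ID= field of the os-release output above ("buildroot" here). A tiny sketch of that parse, assuming only the file format shown in the log:

// Illustrative sketch: extract the ID= field from os-release output, which
// is what the "found compatible host: buildroot" decision above is keyed on.
package main

import (
	"fmt"
	"strings"
)

func osReleaseID(out string) string {
	for _, line := range strings.Split(out, "\n") {
		if v, ok := strings.CutPrefix(strings.TrimSpace(line), "ID="); ok {
			return strings.Trim(v, `"`)
		}
	}
	return ""
}

func main() {
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\n"
	fmt.Println(osReleaseID(out)) // buildroot
}
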
	I0816 10:03:55.332834    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:03:55.332971    3758 buildroot.go:166] provisioning hostname "ha-286000-m03"
	I0816 10:03:55.332982    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:03:55.333079    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.333186    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.333277    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.333375    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.333463    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.333593    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.333737    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.333746    3758 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-286000-m03 && echo "ha-286000-m03" | sudo tee /etc/hostname
	I0816 10:03:55.411701    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-286000-m03
	
	I0816 10:03:55.411715    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.411851    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.411965    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.412057    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.412140    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.412274    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.412418    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.412429    3758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-286000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-286000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-286000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 10:03:55.485102    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 10:03:55.485118    3758 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19461-1276/.minikube CaCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19461-1276/.minikube}
	I0816 10:03:55.485127    3758 buildroot.go:174] setting up certificates
	I0816 10:03:55.485134    3758 provision.go:84] configureAuth start
	I0816 10:03:55.485141    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:03:55.485284    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:03:55.485375    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.485445    3758 provision.go:143] copyHostCerts
	I0816 10:03:55.485474    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:03:55.485527    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem, removing ...
	I0816 10:03:55.485533    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:03:55.485654    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem (1123 bytes)
	I0816 10:03:55.485864    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:03:55.485895    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem, removing ...
	I0816 10:03:55.485900    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:03:55.486003    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem (1679 bytes)
	I0816 10:03:55.486172    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:03:55.486206    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem, removing ...
	I0816 10:03:55.486215    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:03:55.486286    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem (1078 bytes)
	I0816 10:03:55.486435    3758 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem org=jenkins.ha-286000-m03 san=[127.0.0.1 192.169.0.7 ha-286000-m03 localhost minikube]
	I0816 10:03:55.572803    3758 provision.go:177] copyRemoteCerts
	I0816 10:03:55.572855    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 10:03:55.572869    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.573015    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.573114    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.573208    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.573304    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:03:55.612104    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 10:03:55.612186    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 10:03:55.632484    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 10:03:55.632548    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 10:03:55.652677    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 10:03:55.652752    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 10:03:55.672555    3758 provision.go:87] duration metric: took 187.4165ms to configureAuth
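
[Annotation] configureAuth above generated a server certificate signed by the minikube CA with the SAN set [127.0.0.1 192.169.0.7 ha-286000-m03 localhost minikube]. A condensed crypto/x509 sketch of that kind of issuance; the key size, validity window, and key usages here are assumptions, not values taken from the log:

// Illustrative sketch of issuing a CA-signed server cert with the SAN set
// seen in the log line above; not minikube's actual provision code.
package certs

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048) // assumed key size
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-286000-m03"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(1, 0, 0), // assumed validity
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log: 127.0.0.1 192.169.0.7 ha-286000-m03 localhost minikube
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.7")},
		DNSNames:    []string{"ha-286000-m03", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}
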
	I0816 10:03:55.672568    3758 buildroot.go:189] setting minikube options for container-runtime
	I0816 10:03:55.672735    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:03:55.672748    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:55.672889    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.672982    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.673071    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.673157    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.673245    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.673371    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.673496    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.673504    3758 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 10:03:55.738053    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 10:03:55.738064    3758 buildroot.go:70] root file system type: tmpfs
	I0816 10:03:55.738156    3758 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 10:03:55.738167    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.738306    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.738397    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.738489    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.738573    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.738694    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.738841    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.738890    3758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 10:03:55.813774    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 10:03:55.813790    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.813924    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.814019    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.814103    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.814193    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.814320    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.814470    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.814484    3758 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 10:03:57.356529    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
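
[Annotation] The diff-or-install command above makes the unit update idempotent: diff exits non-zero when docker.service differs from (or, as here, does not yet exist beside) the rendered docker.service.new, and only then is the new file swapped in and docker reloaded, enabled, and restarted. A sketch of the same pattern, with runSSH standing in as a hypothetical helper for the ssh_runner seen in this log:

// Illustrative sketch (runSSH is a hypothetical helper, not minikube's API):
// only replace and restart docker.service when the freshly rendered unit
// actually differs from what is on disk.
package provision

import "fmt"

func updateDockerUnit(runSSH func(cmd string) error) error {
	const path = "/lib/systemd/system/docker.service"
	// diff exits non-zero when the files differ or the target is missing,
	// so the right-hand side runs exactly in those cases, as in the log.
	cmd := fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
			"sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && "+
			"sudo systemctl -f restart docker; }", path)
	return runSSH(cmd)
}
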
	I0816 10:03:57.356544    3758 main.go:141] libmachine: Checking connection to Docker...
	I0816 10:03:57.356549    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetURL
	I0816 10:03:57.356692    3758 main.go:141] libmachine: Docker is up and running!
	I0816 10:03:57.356700    3758 main.go:141] libmachine: Reticulating splines...
	I0816 10:03:57.356705    3758 client.go:171] duration metric: took 14.164443579s to LocalClient.Create
	I0816 10:03:57.356717    3758 start.go:167] duration metric: took 14.164482312s to libmachine.API.Create "ha-286000"
	I0816 10:03:57.356727    3758 start.go:293] postStartSetup for "ha-286000-m03" (driver="hyperkit")
	I0816 10:03:57.356734    3758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 10:03:57.356744    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:57.356919    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 10:03:57.356935    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:57.357030    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:57.357130    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:57.357273    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:57.357390    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:03:57.398231    3758 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 10:03:57.402472    3758 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 10:03:57.402483    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/addons for local assets ...
	I0816 10:03:57.402571    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/files for local assets ...
	I0816 10:03:57.402723    3758 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> 18312.pem in /etc/ssl/certs
	I0816 10:03:57.402729    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /etc/ssl/certs/18312.pem
	I0816 10:03:57.402895    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 10:03:57.412226    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:03:57.443242    3758 start.go:296] duration metric: took 86.507779ms for postStartSetup
	I0816 10:03:57.443268    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetConfigRaw
	I0816 10:03:57.443871    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:03:57.444031    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:03:57.444386    3758 start.go:128] duration metric: took 14.284913269s to createHost
	I0816 10:03:57.444401    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:57.444490    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:57.444568    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:57.444658    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:57.444732    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:57.444842    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:57.444963    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:57.444971    3758 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 10:03:57.509634    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723827837.586061636
	
	I0816 10:03:57.509648    3758 fix.go:216] guest clock: 1723827837.586061636
	I0816 10:03:57.509653    3758 fix.go:229] Guest: 2024-08-16 10:03:57.586061636 -0700 PDT Remote: 2024-08-16 10:03:57.444395 -0700 PDT m=+129.370490857 (delta=141.666636ms)
	I0816 10:03:57.509665    3758 fix.go:200] guest clock delta is within tolerance: 141.666636ms
	I0816 10:03:57.509669    3758 start.go:83] releasing machines lock for "ha-286000-m03", held for 14.350339851s
	I0816 10:03:57.509691    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:57.509832    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:03:57.542277    3758 out.go:177] * Found network options:
	I0816 10:03:57.562266    3758 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0816 10:03:57.583040    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	W0816 10:03:57.583073    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:03:57.583092    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:57.583964    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:57.584310    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:57.584469    3758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 10:03:57.584522    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	W0816 10:03:57.584570    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	W0816 10:03:57.584596    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:03:57.584708    3758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0816 10:03:57.584735    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:57.584813    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:57.584995    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:57.585023    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:57.585164    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:57.585195    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:57.585338    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:03:57.585406    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:57.585566    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	W0816 10:03:57.623481    3758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 10:03:57.623541    3758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 10:03:57.670278    3758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 10:03:57.670298    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:03:57.670408    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:03:57.686134    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 10:03:57.694681    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 10:03:57.703134    3758 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 10:03:57.703198    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 10:03:57.711736    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:03:57.720041    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 10:03:57.728421    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:03:57.737252    3758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 10:03:57.746356    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 10:03:57.755117    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 10:03:57.763962    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 10:03:57.772639    3758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 10:03:57.780684    3758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 10:03:57.788938    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:03:57.893932    3758 ssh_runner.go:195] Run: sudo systemctl restart containerd
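
[Annotation] The run of sed edits above rewrites /etc/containerd/config.toml for the cgroupfs driver, enables IP forwarding, and then reloads and restarts containerd. The whole step reads as an ordered list of remote commands with fail-fast semantics; a sketch under that reading, again with a hypothetical runSSH helper:

// Illustrative sketch (runSSH is hypothetical): the containerd setup above is
// a fixed, ordered sequence of remote shell edits; abort on the first failure.
package provision

import "fmt"

func configureContainerd(runSSH func(cmd string) error) error {
	cmds := []string{
		`sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"`,
		`sh -c "sudo sed -i 's|\"io.containerd.runtime.v1.linux\"|\"io.containerd.runc.v2\"|g' /etc/containerd/config.toml"`,
		`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart containerd`,
	}
	for _, c := range cmds {
		if err := runSSH(c); err != nil {
			return fmt.Errorf("containerd setup step %q: %w", c, err)
		}
	}
	return nil
}
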
	I0816 10:03:57.913150    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:03:57.913224    3758 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 10:03:57.932274    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:03:57.945000    3758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 10:03:57.965222    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:03:57.977460    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:03:57.988259    3758 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 10:03:58.006552    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:03:58.017261    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:03:58.032545    3758 ssh_runner.go:195] Run: which cri-dockerd
	I0816 10:03:58.035464    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 10:03:58.042742    3758 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 10:03:58.056747    3758 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 10:03:58.163471    3758 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 10:03:58.281020    3758 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 10:03:58.281044    3758 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 10:03:58.294992    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:03:58.399122    3758 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:04:59.415916    3758 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.017935847s)
	I0816 10:04:59.415981    3758 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0816 10:04:59.453653    3758 out.go:201] 
	W0816 10:04:59.474040    3758 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 16 17:03:56 ha-286000-m03 systemd[1]: Starting Docker Application Container Engine...
	Aug 16 17:03:56 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:56.200976866Z" level=info msg="Starting up"
	Aug 16 17:03:56 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:56.201455258Z" level=info msg="containerd not running, starting managed containerd"
	Aug 16 17:03:56 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:56.201908035Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=519
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.218394236Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233514110Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233584256Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233654513Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233690141Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233768300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233806284Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233955562Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233996222Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.234026670Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.234054929Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.234137090Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.234382642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.235934793Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.235985918Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.236117022Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.236159609Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.236256962Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.236327154Z" level=info msg="metadata content store policy set" policy=shared
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238422470Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238509436Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238555940Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238594343Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238633954Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238734892Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238944917Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239083528Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239123172Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239153894Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239184820Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239214617Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239248728Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239285992Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239333696Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239368624Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239411950Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239451554Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239490333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239523121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239554681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239589296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239621390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239653930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239683266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239712579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239742056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239772828Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239801901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239830636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239859570Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239893189Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239929555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239960582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239991787Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240076099Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240121809Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240153088Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240182438Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240210918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240240067Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240268586Z" level=info msg="NRI interface is disabled by configuration."
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240520719Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240609661Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240710485Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240795617Z" level=info msg="containerd successfully booted in 0.023088s"
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.221812848Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.228854621Z" level=info msg="Loading containers: start."
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.312023793Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.390459780Z" level=info msg="Loading containers: done."
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.404746652Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.404864935Z" level=info msg="Daemon has completed initialization"
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.430646618Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.430810646Z" level=info msg="API listen on [::]:2376"
	Aug 16 17:03:57 ha-286000-m03 systemd[1]: Started Docker Application Container Engine.
	Aug 16 17:03:58 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:58.488220464Z" level=info msg="Processing signal 'terminated'"
	Aug 16 17:03:58 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:58.489144852Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 16 17:03:58 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:58.489473398Z" level=info msg="Daemon shutdown complete"
	Aug 16 17:03:58 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:58.489515121Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 16 17:03:58 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:58.489527809Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 16 17:03:58 ha-286000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Aug 16 17:03:59 ha-286000-m03 systemd[1]: docker.service: Deactivated successfully.
	Aug 16 17:03:59 ha-286000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Aug 16 17:03:59 ha-286000-m03 systemd[1]: Starting Docker Application Container Engine...
	Aug 16 17:03:59 ha-286000-m03 dockerd[913]: time="2024-08-16T17:03:59.520198872Z" level=info msg="Starting up"
	Aug 16 17:04:59 ha-286000-m03 dockerd[913]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 16 17:04:59 ha-286000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 16 17:04:59 ha-286000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 16 17:04:59 ha-286000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0816 10:04:59.474111    3758 out.go:270] * 
	W0816 10:04:59.474938    3758 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:04:59.558192    3758 out.go:201] 
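	
	The fatal line in the transcript above is dockerd (PID 913) giving up after a minute because nothing answered on /run/containerd/containerd.sock. A minimal stdlib sketch of that dial (written for illustration only; dockerd itself dials the socket over gRPC with a context deadline) checks whether anything is listening:
	
	// dial_containerd.go - hand-rolled probe, not minikube's code; the socket
	// path and the one-minute window are taken from the failure above.
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		const sock = "/run/containerd/containerd.sock"
		// A dead or wedged containerd surfaces in dockerd as
		// "context deadline exceeded" after roughly this long.
		conn, err := net.DialTimeout("unix", sock, 60*time.Second)
		if err != nil {
			fmt.Printf("dial %s failed: %v\n", sock, err)
			return
		}
		defer conn.Close()
		fmt.Printf("dial %s succeeded\n", sock)
	}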
	
	
	==> Docker <==
	Aug 16 17:02:49 ha-286000 dockerd[1241]: time="2024-08-16T17:02:49.128944226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:02:49 ha-286000 dockerd[1241]: time="2024-08-16T17:02:49.129032209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:02:49 ha-286000 cri-dockerd[1134]: time="2024-08-16T17:02:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/482990a4b00e6dcb249010bfdf827ea83c634257748fd71796c7a7f7994180a2/resolv.conf as [nameserver 192.169.0.1]"
	Aug 16 17:02:49 ha-286000 dockerd[1241]: time="2024-08-16T17:02:49.342160503Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:02:49 ha-286000 dockerd[1241]: time="2024-08-16T17:02:49.342228348Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:02:49 ha-286000 dockerd[1241]: time="2024-08-16T17:02:49.342241389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:02:49 ha-286000 dockerd[1241]: time="2024-08-16T17:02:49.342329631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.631608682Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.631681064Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.631700294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.631803753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.636334140Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.636542015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.636768594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.637206626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:02:50 ha-286000 cri-dockerd[1134]: time="2024-08-16T17:02:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fbd84fb813c9034ce56be933a9dc0c8539c5c831abbd163996da762065f0c208/resolv.conf as [nameserver 192.169.0.1]"
	Aug 16 17:02:50 ha-286000 cri-dockerd[1134]: time="2024-08-16T17:02:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/452942e267927b3a15327ece33bffe6fb305db22e6a72ff9b65d4acfe89f3891/resolv.conf as [nameserver 192.169.0.1]"
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.842515621Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.842901741Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.843146100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.843415719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.885181891Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.885227629Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.885240492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.885438543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
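	
	The "Will attempt to re-write config file" lines show cri-dockerd pointing each container's resolv.conf at the hyperkit gateway, 192.169.0.1. A toy sketch of that rewrite (illustrative only; the container ID and nameserver are copied from the log, and the real implementation preserves existing search/options entries rather than clobbering the file):
	
	// rewrite_resolv.go - illustrative sketch of the rewrite cri-dockerd logs above.
	package main
	
	import (
		"fmt"
		"os"
		"path/filepath"
	)
	
	func main() {
		// ID taken from the storage-provisioner sandbox line above.
		containerID := "482990a4b00e6dcb249010bfdf827ea83c634257748fd71796c7a7f7994180a2"
		path := filepath.Join("/var/lib/docker/containers", containerID, "resolv.conf")
		if err := os.WriteFile(path, []byte("nameserver 192.169.0.1\n"), 0o644); err != nil {
			fmt.Println("rewrite failed:", err)
		}
	}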
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bcd7170b050a5       cbb01a7bd410d                                                                                       2 minutes ago       Running             coredns                   0                   452942e267927       coredns-6f6b679f8f-rfbz7
	60d3d03e297ce       cbb01a7bd410d                                                                                       2 minutes ago       Running             coredns                   0                   fbd84fb813c90       coredns-6f6b679f8f-2kqjf
	f55b59f53c6eb       6e38f40d628db                                                                                       2 minutes ago       Running             storage-provisioner       0                   482990a4b00e6       storage-provisioner
	2677394dcd66f       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166            2 minutes ago       Running             kindnet-cni               0                   afaf657e85c32       kindnet-whqxb
	81f6c96d46494       ad83b2ca7b09e                                                                                       2 minutes ago       Running             kube-proxy                0                   dfb822fc27b5e       kube-proxy-w4nt2
	cafa34c562392       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f   2 minutes ago       Running             kube-vip                  0                   69fba128b04a6       kube-vip-ha-286000
	8f5867ee99d9b       045733566833c                                                                                       2 minutes ago       Running             kube-controller-manager   0                   e87fd3e77384f       kube-controller-manager-ha-286000
	f7b2e9efdd94f       1766f54c897f0                                                                                       2 minutes ago       Running             kube-scheduler            0                   8e2b186980aff       kube-scheduler-ha-286000
	d8dadff6cec78       2e96e5913fc06                                                                                       2 minutes ago       Running             etcd                      0                   cdb14ff7d8896       etcd-ha-286000
	5f3adc202a7a2       604f5db92eaa8                                                                                       2 minutes ago       Running             kube-apiserver            0                   818ee6dafe6c9       kube-apiserver-ha-286000
	
	
	==> coredns [60d3d03e297c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:37081 - 62351 "HINFO IN 7437422972060865489.3041931607585121070. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009759141s
	
	
	==> coredns [bcd7170b050a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:47813 - 35687 "HINFO IN 8431179596625010309.1478595720938230641. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009077429s
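	
	Both CoreDNS instances log exactly one HINFO query for a random numeric name that returns NXDOMAIN; that is the startup self-probe CoreDNS's loop plugin sends to verify queries do not forward back to itself. A rough equivalent using github.com/miekg/dns (the library CoreDNS builds on), with the probe name copied from the log above:
	
	// hinfo_probe.go - sketch of the loop-detection query, not CoreDNS's code.
	package main
	
	import (
		"fmt"
	
		"github.com/miekg/dns"
	)
	
	func main() {
		m := new(dns.Msg)
		// A random, unresolvable name: an NXDOMAIN answer (as logged above)
		// means the query did not loop back through this server.
		m.SetQuestion("7437422972060865489.3041931607585121070.", dns.TypeHINFO)
		r, _, err := (&dns.Client{Net: "udp"}).Exchange(m, "127.0.0.1:53")
		if err != nil {
			fmt.Println("probe failed:", err)
			return
		}
		fmt.Println("rcode:", dns.RcodeToString[r.Rcode])
	}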
	
	
	==> describe nodes <==
	Name:               ha-286000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-286000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=ha-286000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T10_02_26_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:02:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-286000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:04:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 17:02:56 +0000   Fri, 16 Aug 2024 17:02:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 17:02:56 +0000   Fri, 16 Aug 2024 17:02:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 17:02:56 +0000   Fri, 16 Aug 2024 17:02:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 17:02:56 +0000   Fri, 16 Aug 2024 17:02:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-286000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 a56c711da9894c3c86f2a7af9fec2b53
	  System UUID:                ad96408c-0000-0000-89eb-d74e5b68d297
	  Boot ID:                    23e4d4eb-a603-45bf-aca4-fb0893407f5a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-2kqjf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m32s
	  kube-system                 coredns-6f6b679f8f-rfbz7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m32s
	  kube-system                 etcd-ha-286000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m36s
	  kube-system                 kindnet-whqxb                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m32s
	  kube-system                 kube-apiserver-ha-286000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kube-controller-manager-ha-286000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m36s
	  kube-system                 kube-proxy-w4nt2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-scheduler-ha-286000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m36s
	  kube-system                 kube-vip-ha-286000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m30s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  2m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m43s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m43s (x8 over 2m43s)  kubelet          Node ha-286000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m43s (x8 over 2m43s)  kubelet          Node ha-286000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m43s (x7 over 2m43s)  kubelet          Node ha-286000 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m36s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m36s                  kubelet          Node ha-286000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m36s                  kubelet          Node ha-286000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m36s                  kubelet          Node ha-286000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m33s                  node-controller  Node ha-286000 event: Registered Node ha-286000 in Controller
	  Normal  NodeReady                2m13s                  kubelet          Node ha-286000 status is now: NodeReady
	  Normal  RegisteredNode           93s                    node-controller  Node ha-286000 event: Registered Node ha-286000 in Controller
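	
	As a sanity check on the summary above, the request percentages are plain integer ratios against the node's Allocatable values (950m of 2 CPUs, 290Mi of 2164336Ki); a few lines of throwaway Go reproduce them:
	
	// alloc_pct.go - back-of-the-envelope check, not kubectl's code.
	package main
	
	import "fmt"
	
	func main() {
		const (
			cpuRequestMilli = 950         // sum of CPU requests above
			cpuAllocMilli   = 2000        // Allocatable: cpu: 2
			memRequestKi    = 290 * 1024  // 290Mi expressed in Ki
			memAllocKi      = 2164336     // Allocatable: memory: 2164336Ki
		)
		fmt.Printf("cpu: %d%%\n", cpuRequestMilli*100/cpuAllocMilli) // 47%
		fmt.Printf("memory: %d%%\n", memRequestKi*100/memAllocKi)    // 13%
	}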
	
	
	Name:               ha-286000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-286000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=ha-286000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T10_03_22_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:03:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-286000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:04:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 17:03:50 +0000   Fri, 16 Aug 2024 17:03:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 17:03:50 +0000   Fri, 16 Aug 2024 17:03:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 17:03:50 +0000   Fri, 16 Aug 2024 17:03:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 17:03:50 +0000   Fri, 16 Aug 2024 17:03:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-286000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 42bf4a1b451f44ad925f50a6a94e4cff
	  System UUID:                f7634935-0000-0000-a751-ac72999da031
	  Boot ID:                    e82d142e-37b9-4938-81bc-f5bc1a2db23f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-286000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         99s
	  kube-system                 kindnet-t9kjf                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      101s
	  kube-system                 kube-apiserver-ha-286000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-controller-manager-ha-286000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-proxy-pt669                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-scheduler-ha-286000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-vip-ha-286000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 96s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  101s (x8 over 101s)  kubelet          Node ha-286000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    101s (x8 over 101s)  kubelet          Node ha-286000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     101s (x7 over 101s)  kubelet          Node ha-286000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  101s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           98s                  node-controller  Node ha-286000-m02 event: Registered Node ha-286000-m02 in Controller
	  Normal  RegisteredNode           93s                  node-controller  Node ha-286000-m02 event: Registered Node ha-286000-m02 in Controller
	
	
	==> dmesg <==
	[  +0.006640] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.760553] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +1.351722] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000003] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug16 17:02] systemd-fstab-generator[519]: Ignoring "noauto" option for root device
	[  +0.114039] systemd-fstab-generator[531]: Ignoring "noauto" option for root device
	[  +1.207693] kauditd_printk_skb: 42 callbacks suppressed
	[  +0.611653] systemd-fstab-generator[786]: Ignoring "noauto" option for root device
	[  +0.265175] systemd-fstab-generator[846]: Ignoring "noauto" option for root device
	[  +0.101006] systemd-fstab-generator[858]: Ignoring "noauto" option for root device
	[  +0.123308] systemd-fstab-generator[872]: Ignoring "noauto" option for root device
	[  +2.497195] systemd-fstab-generator[1086]: Ignoring "noauto" option for root device
	[  +0.110299] systemd-fstab-generator[1098]: Ignoring "noauto" option for root device
	[  +0.095670] systemd-fstab-generator[1110]: Ignoring "noauto" option for root device
	[  +0.122299] systemd-fstab-generator[1126]: Ignoring "noauto" option for root device
	[  +3.636333] systemd-fstab-generator[1227]: Ignoring "noauto" option for root device
	[  +0.056190] kauditd_printk_skb: 233 callbacks suppressed
	[  +2.505796] systemd-fstab-generator[1479]: Ignoring "noauto" option for root device
	[  +3.634449] systemd-fstab-generator[1611]: Ignoring "noauto" option for root device
	[  +0.055756] kauditd_printk_skb: 70 callbacks suppressed
	[  +6.910973] systemd-fstab-generator[2107]: Ignoring "noauto" option for root device
	[  +0.079351] kauditd_printk_skb: 72 callbacks suppressed
	[  +9.264773] kauditd_printk_skb: 51 callbacks suppressed
	[Aug16 17:03] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [d8dadff6cec7] <==
	{"level":"info","ts":"2024-08-16T17:02:20.424522Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T17:02:20.426652Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T17:02:30.726665Z","caller":"traceutil/trace.go:171","msg":"trace[1707882676] transaction","detail":"{read_only:false; response_revision:410; number_of_response:1; }","duration":"136.026355ms","start":"2024-08-16T17:02:30.590619Z","end":"2024-08-16T17:02:30.726645Z","steps":["trace[1707882676] 'process raft request'  (duration: 40.573628ms)","trace[1707882676] 'compare'  (duration: 94.944344ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-16T17:02:30.726754Z","caller":"traceutil/trace.go:171","msg":"trace[1410430809] transaction","detail":"{read_only:false; response_revision:411; number_of_response:1; }","duration":"135.576729ms","start":"2024-08-16T17:02:30.591165Z","end":"2024-08-16T17:02:30.726742Z","steps":["trace[1410430809] 'process raft request'  (duration: 135.198738ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T17:03:20.601620Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 switched to configuration voters=(13314548521573537860) learners=(676450350361439540)"}
	{"level":"info","ts":"2024-08-16T17:03:20.601722Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844","added-peer-id":"9633c02797b6d34","added-peer-peer-urls":["https://192.169.0.6:2380"]}
	{"level":"info","ts":"2024-08-16T17:03:20.601869Z","caller":"rafthttp/peer.go:133","msg":"starting remote peer","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:20.601961Z","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:20.602332Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:20.602493Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:20.602718Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34","remote-peer-urls":["https://192.169.0.6:2380"]}
	{"level":"info","ts":"2024-08-16T17:03:20.606154Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:20.606173Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:20.606385Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:20.607792Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"b8c6c7563d17d844","raft-conf-change":"ConfChangeAddLearnerNode","raft-conf-change-node-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:21.553074Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:21.553169Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:21.561367Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"9633c02797b6d34","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-16T17:03:21.561449Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:21.561528Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:21.571776Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"9633c02797b6d34","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-16T17:03:21.571882Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:22.121813Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 switched to configuration voters=(676450350361439540 13314548521573537860)"}
	{"level":"info","ts":"2024-08-16T17:03:22.121979Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844"}
	{"level":"info","ts":"2024-08-16T17:03:22.122011Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"b8c6c7563d17d844","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"9633c02797b6d34"}
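	
	The etcd entries above record ha-286000-m02 joining as a non-voting raft learner (ConfChangeAddLearnerNode for member 9633c02797b6d34) and, about two seconds later, being promoted to a voter (ConfChangeAddNode) once its log caught up. A minimal client-side sketch of that sequence with go.etcd.io/etcd/client/v3, using the client/peer URLs from the log and omitting the TLS configuration a real HA cluster requires:
	
	// promote_member.go - sketch of the add-as-learner / promote flow; assumes
	// endpoints from the log and no TLS, so it is illustrative, not drop-in.
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		clientv3 "go.etcd.io/etcd/client/v3"
	)
	
	func main() {
		cli, err := clientv3.New(clientv3.Config{
			Endpoints:   []string{"https://192.169.0.5:2379"}, // TLS config omitted
			DialTimeout: 5 * time.Second,
		})
		if err != nil {
			panic(err)
		}
		defer cli.Close()
	
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
	
		// Join the new control-plane node as a learner first...
		resp, err := cli.MemberAddAsLearner(ctx, []string{"https://192.169.0.6:2380"})
		if err != nil {
			panic(err)
		}
		// ...then promote it to a voter; etcd rejects the promotion
		// until the learner's raft log has caught up, so callers retry.
		if _, err := cli.MemberPromote(ctx, resp.Member.ID); err != nil {
			fmt.Println("promote (retry until caught up):", err)
		}
	}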
	
	
	==> kernel <==
	 17:05:01 up 3 min,  0 users,  load average: 0.33, 0.29, 0.12
	Linux ha-286000 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2677394dcd66] <==
	I0816 17:03:55.224689       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:04:05.225300       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:04:05.225474       1 main.go:299] handling current node
	I0816 17:04:05.225517       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:04:05.225703       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:04:15.232596       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:04:15.232635       1 main.go:299] handling current node
	I0816 17:04:15.232646       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:04:15.232651       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:04:25.223146       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:04:25.223205       1 main.go:299] handling current node
	I0816 17:04:25.223222       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:04:25.223230       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:04:35.223742       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:04:35.223876       1 main.go:299] handling current node
	I0816 17:04:35.223918       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:04:35.223945       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:04:45.223419       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:04:45.223450       1 main.go:299] handling current node
	I0816 17:04:45.223466       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:04:45.223472       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:04:55.230275       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:04:55.230307       1 main.go:299] handling current node
	I0816 17:04:55.230323       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:04:55.230330       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
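	
	Each kindnet cycle above boils down to one route per peer node: pod traffic for ha-286000-m02's CIDR 10.244.1.0/24 is sent via its node IP 192.169.0.6. A sketch of that route programming with github.com/vishvananda/netlink (kindnet's resync loop does effectively this; the addresses are the ones from the log, and it needs CAP_NET_ADMIN):
	
	// pod_cidr_route.go - illustrative route install, not kindnet's code.
	package main
	
	import (
		"net"
	
		"github.com/vishvananda/netlink"
	)
	
	func main() {
		_, dst, err := net.ParseCIDR("10.244.1.0/24") // ha-286000-m02's PodCIDR
		if err != nil {
			panic(err)
		}
		route := &netlink.Route{
			Dst: dst,
			Gw:  net.ParseIP("192.169.0.6"), // ha-286000-m02's InternalIP
		}
		// RouteReplace is idempotent, which suits a periodic resync loop.
		if err := netlink.RouteReplace(route); err != nil {
			panic(err)
		}
	}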
	
	
	==> kube-apiserver [5f3adc202a7a] <==
	I0816 17:02:22.200067       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0816 17:02:22.200140       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0816 17:02:22.200254       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0816 17:02:22.200593       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0816 17:02:22.200675       1 aggregator.go:171] initial CRD sync complete...
	I0816 17:02:22.200751       1 autoregister_controller.go:144] Starting autoregister controller
	I0816 17:02:22.200815       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0816 17:02:22.200855       1 cache.go:39] Caches are synced for autoregister controller
	E0816 17:02:22.203287       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0816 17:02:22.246497       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0816 17:02:23.102546       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0816 17:02:23.105482       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0816 17:02:23.105513       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0816 17:02:23.438732       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0816 17:02:23.465745       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0816 17:02:23.506778       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0816 17:02:23.510486       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5]
	I0816 17:02:23.511071       1 controller.go:615] quota admission added evaluator for: endpoints
	I0816 17:02:23.513769       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0816 17:02:24.114339       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0816 17:02:25.748425       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0816 17:02:25.762462       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0816 17:02:25.770706       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0816 17:02:29.466806       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0816 17:02:29.715365       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [8f5867ee99d9] <==
	I0816 17:02:48.802075       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="42.36µs"
	I0816 17:02:48.832309       1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0816 17:02:51.973991       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="228.262µs"
	I0816 17:02:52.009660       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="19.318421ms"
	I0816 17:02:52.024984       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="15.100214ms"
	I0816 17:02:52.026026       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="142.454µs"
	I0816 17:02:56.405607       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000"
	I0816 17:03:20.527823       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-286000-m02\" does not exist"
	I0816 17:03:20.538070       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-286000-m02" podCIDRs=["10.244.1.0/24"]
	I0816 17:03:20.538196       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m02"
	I0816 17:03:20.538288       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m02"
	I0816 17:03:20.543294       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m02"
	I0816 17:03:20.583713       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m02"
	I0816 17:03:22.174331       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m02"
	I0816 17:03:22.637945       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m02"
	I0816 17:03:22.733197       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m02"
	I0816 17:03:23.840794       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-286000-m02"
	I0816 17:03:23.936982       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m02"
	I0816 17:03:28.116114       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m02"
	I0816 17:03:28.213864       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m02"
	I0816 17:03:30.558882       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m02"
	I0816 17:03:39.011535       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m02"
	I0816 17:03:39.020670       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m02"
	I0816 17:03:43.141912       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m02"
	I0816 17:03:50.726543       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m02"
	
	
	==> kube-proxy [81f6c96d4649] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 17:02:30.214569       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 17:02:30.222978       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0816 17:02:30.223011       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 17:02:30.267797       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 17:02:30.267825       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 17:02:30.267843       1 server_linux.go:169] "Using iptables Proxier"
	I0816 17:02:30.270154       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 17:02:30.270311       1 server.go:483] "Version info" version="v1.31.0"
	I0816 17:02:30.270319       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 17:02:30.271352       1 config.go:197] "Starting service config controller"
	I0816 17:02:30.271358       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 17:02:30.271372       1 config.go:104] "Starting endpoint slice config controller"
	I0816 17:02:30.271375       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 17:02:30.271598       1 config.go:326] "Starting node config controller"
	I0816 17:02:30.271603       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 17:02:30.371824       1 shared_informer.go:320] Caches are synced for node config
	I0816 17:02:30.371859       1 shared_informer.go:320] Caches are synced for service config
	I0816 17:02:30.371867       1 shared_informer.go:320] Caches are synced for endpoint slice config
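	
	The two "Error cleaning up nftables rules" entries come from kube-proxy piping table definitions into nft's stdin (hence the /dev/stdin in the message) on a kernel without nf_tables support; it then falls back to iptables, as the later "Using iptables Proxier" line confirms. A sketch of that probe:
	
	// nft_probe.go - mimics the failing nft invocation above; on this kernel it
	// fails with "Operation not supported", which is harmless for the iptables mode.
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		cmd := exec.Command("nft", "-f", "-")
		cmd.Stdin = strings.NewReader("add table ip kube-proxy\n")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("nftables unavailable: %v\n%s", err, out)
			return
		}
		fmt.Println("nftables available")
	}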
	
	
	==> kube-scheduler [f7b2e9efdd94] <==
	W0816 17:02:22.176666       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 17:02:22.176699       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:22.176803       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 17:02:22.176836       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:22.176847       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 17:02:22.176852       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:22.176967       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 17:02:22.176999       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:22.177013       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 17:02:22.179079       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:22.179253       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 17:02:22.179322       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0816 17:02:23.072963       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 17:02:23.073256       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:23.081000       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 17:02:23.081176       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:23.142220       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 17:02:23.142263       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:23.166962       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 17:02:23.167003       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:23.255186       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 17:02:23.255230       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:23.456273       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 17:02:23.456447       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0816 17:02:26.460413       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 16 17:02:48 ha-286000 kubelet[2114]: I0816 17:02:48.756093    2114 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
	Aug 16 17:02:48 ha-286000 kubelet[2114]: W0816 17:02:48.780824    2114 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ha-286000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ha-286000' and this object
	Aug 16 17:02:48 ha-286000 kubelet[2114]: E0816 17:02:48.780871    2114 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ha-286000\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ha-286000' and this object" logger="UnhandledError"
	Aug 16 17:02:48 ha-286000 kubelet[2114]: I0816 17:02:48.876250    2114 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dvkg\" (UniqueName: \"kubernetes.io/projected/a593e27a-e38b-46c7-a603-44963c31c095-kube-api-access-8dvkg\") pod \"coredns-6f6b679f8f-rfbz7\" (UID: \"a593e27a-e38b-46c7-a603-44963c31c095\") " pod="kube-system/coredns-6f6b679f8f-rfbz7"
	Aug 16 17:02:48 ha-286000 kubelet[2114]: I0816 17:02:48.876379    2114 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fcgl\" (UniqueName: \"kubernetes.io/projected/be18cd09-3e3b-4749-bf29-7001a879f593-kube-api-access-7fcgl\") pod \"coredns-6f6b679f8f-2kqjf\" (UID: \"be18cd09-3e3b-4749-bf29-7001a879f593\") " pod="kube-system/coredns-6f6b679f8f-2kqjf"
	Aug 16 17:02:48 ha-286000 kubelet[2114]: I0816 17:02:48.876439    2114 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a593e27a-e38b-46c7-a603-44963c31c095-config-volume\") pod \"coredns-6f6b679f8f-rfbz7\" (UID: \"a593e27a-e38b-46c7-a603-44963c31c095\") " pod="kube-system/coredns-6f6b679f8f-rfbz7"
	Aug 16 17:02:48 ha-286000 kubelet[2114]: I0816 17:02:48.876502    2114 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4805d53b-2db3-4092-a3f2-d4a854e93adc-tmp\") pod \"storage-provisioner\" (UID: \"4805d53b-2db3-4092-a3f2-d4a854e93adc\") " pod="kube-system/storage-provisioner"
	Aug 16 17:02:48 ha-286000 kubelet[2114]: I0816 17:02:48.876598    2114 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrs7b\" (UniqueName: \"kubernetes.io/projected/4805d53b-2db3-4092-a3f2-d4a854e93adc-kube-api-access-lrs7b\") pod \"storage-provisioner\" (UID: \"4805d53b-2db3-4092-a3f2-d4a854e93adc\") " pod="kube-system/storage-provisioner"
	Aug 16 17:02:48 ha-286000 kubelet[2114]: I0816 17:02:48.876669    2114 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be18cd09-3e3b-4749-bf29-7001a879f593-config-volume\") pod \"coredns-6f6b679f8f-2kqjf\" (UID: \"be18cd09-3e3b-4749-bf29-7001a879f593\") " pod="kube-system/coredns-6f6b679f8f-2kqjf"
	Aug 16 17:02:49 ha-286000 kubelet[2114]: E0816 17:02:49.978099    2114 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Aug 16 17:02:49 ha-286000 kubelet[2114]: E0816 17:02:49.978147    2114 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Aug 16 17:02:49 ha-286000 kubelet[2114]: E0816 17:02:49.978867    2114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/be18cd09-3e3b-4749-bf29-7001a879f593-config-volume podName:be18cd09-3e3b-4749-bf29-7001a879f593 nodeName:}" failed. No retries permitted until 2024-08-16 17:02:50.478839444 +0000 UTC m=+24.982282448 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/be18cd09-3e3b-4749-bf29-7001a879f593-config-volume") pod "coredns-6f6b679f8f-2kqjf" (UID: "be18cd09-3e3b-4749-bf29-7001a879f593") : failed to sync configmap cache: timed out waiting for the condition
	Aug 16 17:02:49 ha-286000 kubelet[2114]: E0816 17:02:49.979061    2114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a593e27a-e38b-46c7-a603-44963c31c095-config-volume podName:a593e27a-e38b-46c7-a603-44963c31c095 nodeName:}" failed. No retries permitted until 2024-08-16 17:02:50.479042512 +0000 UTC m=+24.982485512 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a593e27a-e38b-46c7-a603-44963c31c095-config-volume") pod "coredns-6f6b679f8f-rfbz7" (UID: "a593e27a-e38b-46c7-a603-44963c31c095") : failed to sync configmap cache: timed out waiting for the condition
	Aug 16 17:02:51 ha-286000 kubelet[2114]: I0816 17:02:51.972528    2114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=21.972513399 podStartE2EDuration="21.972513399s" podCreationTimestamp="2024-08-16 17:02:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-16 17:02:49.837406668 +0000 UTC m=+24.340849657" watchObservedRunningTime="2024-08-16 17:02:51.972513399 +0000 UTC m=+26.475956386"
	Aug 16 17:02:51 ha-286000 kubelet[2114]: I0816 17:02:51.989711    2114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-rfbz7" podStartSLOduration=22.989695852 podStartE2EDuration="22.989695852s" podCreationTimestamp="2024-08-16 17:02:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-16 17:02:51.97290813 +0000 UTC m=+26.476351124" watchObservedRunningTime="2024-08-16 17:02:51.989695852 +0000 UTC m=+26.493138841"
	Aug 16 17:03:25 ha-286000 kubelet[2114]: E0816 17:03:25.676091    2114 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 17:03:25 ha-286000 kubelet[2114]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 17:03:25 ha-286000 kubelet[2114]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 17:03:25 ha-286000 kubelet[2114]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:03:25 ha-286000 kubelet[2114]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 17:04:25 ha-286000 kubelet[2114]: E0816 17:04:25.676929    2114 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 17:04:25 ha-286000 kubelet[2114]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 17:04:25 ha-286000 kubelet[2114]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 17:04:25 ha-286000 kubelet[2114]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:04:25 ha-286000 kubelet[2114]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-286000 -n ha-286000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-286000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StartCluster (194.41s)
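Note on the scheduler errors in the log above: the repeated "forbidden" list/watch failures for "system:kube-scheduler" (replicasets, persistentvolumeclaims, csinodes, storageclasses, and so on) are the transient errors typically seen while RBAC bindings are still propagating during control-plane bootstrap, and the closing "Caches are synced" line indicates they cleared. A minimal way to confirm the scheduler's permissions after startup, assuming kubectl is pointed at the ha-286000 context from this run (these commands are illustrative and are not part of the test harness):

	kubectl --context ha-286000 auth can-i list persistentvolumes --as=system:kube-scheduler
	kubectl --context ha-286000 auth can-i watch csinodes.storage.k8s.io --as=system:kube-scheduler
	kubectl --context ha-286000 auth can-i get configmaps -n kube-system --as=system:kube-scheduler

Each should print "yes" once the scheduler's role bindings are in place.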

TestMultiControlPlane/serial/DeployApp (714.83s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-286000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-286000 -- rollout status deployment/busybox
E0816 10:05:35.669003    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/functional-373000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:05:35.676850    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/functional-373000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:05:35.689880    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/functional-373000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:05:35.711931    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/functional-373000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:05:35.753913    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/functional-373000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:05:35.836080    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/functional-373000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:05:35.998898    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/functional-373000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:05:36.320597    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/functional-373000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:05:36.962185    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/functional-373000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:05:38.245204    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/functional-373000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:05:40.806797    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/functional-373000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:05:45.928085    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/functional-373000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:05:56.171212    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/functional-373000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:06:16.654027    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/functional-373000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:06:32.756765    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/addons-725000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:06:57.614733    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/functional-373000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:08:19.534759    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/functional-373000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:10:35.662903    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/functional-373000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:11:03.374626    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/functional-373000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:11:32.751141    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/addons-725000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:12:55.831833    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/addons-725000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p ha-286000 -- rollout status deployment/busybox: exit status 1 (10m2.264364852s)

-- stdout --
	Waiting for deployment "busybox" rollout to finish: 0 of 4 updated replicas are available...
	Waiting for deployment "busybox" rollout to finish: 0 of 3 updated replicas are available...
	Waiting for deployment "busybox" rollout to finish: 0 out of 3 new replicas have been updated...
	Waiting for deployment "busybox" rollout to finish: 0 of 5 updated replicas are available...
	Waiting for deployment "busybox" rollout to finish: 0 of 4 updated replicas are available...
	Waiting for deployment "busybox" rollout to finish: 0 of 3 updated replicas are available...
	Waiting for deployment "busybox" rollout to finish: 2 out of 3 new replicas have been updated...
	Waiting for deployment "busybox" rollout to finish: 0 of 3 updated replicas are available...
	Waiting for deployment "busybox" rollout to finish: 1 of 3 updated replicas are available...
	Waiting for deployment "busybox" rollout to finish: 2 of 3 updated replicas are available...

-- /stdout --
** stderr ** 
	error: deployment "busybox" exceeded its progress deadline

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-286000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-286000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-286000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-286000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-286000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-286000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-286000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
E0816 10:15:35.658074    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/functional-373000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-286000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-286000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-286000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
E0816 10:16:32.746965    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/addons-725000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-286000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:159: failed to resolve pod IPs: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
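Only two pod IPs ('10.244.0.4 10.244.1.2') ever appear in the retries above, which means one busybox replica never left Pending and so was never assigned a pod IP; that is consistent with the earlier StartCluster failure, which suggests not every expected node joined the cluster. An illustrative check (outside the test harness, same ha-286000 context) to see which replica is unscheduled and how many nodes are Ready:

	kubectl --context ha-286000 get pods -o wide
	kubectl --context ha-286000 get nodes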
ha_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-286000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-286000 -- exec busybox-7dff88458-99xmp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p ha-286000 -- exec busybox-7dff88458-99xmp -- nslookup kubernetes.io: exit status 1 (123.762896ms)

** stderr ** 
	Error from server (BadRequest): pod busybox-7dff88458-99xmp does not have a host assigned

** /stderr **
ha_test.go:173: Pod busybox-7dff88458-99xmp could not resolve 'kubernetes.io': exit status 1
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-286000 -- exec busybox-7dff88458-dvmvk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-286000 -- exec busybox-7dff88458-k9m92 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-286000 -- exec busybox-7dff88458-99xmp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p ha-286000 -- exec busybox-7dff88458-99xmp -- nslookup kubernetes.default: exit status 1 (123.944174ms)

** stderr ** 
	Error from server (BadRequest): pod busybox-7dff88458-99xmp does not have a host assigned

** /stderr **
ha_test.go:183: Pod busybox-7dff88458-99xmp could not resolve 'kubernetes.default': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-286000 -- exec busybox-7dff88458-dvmvk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-286000 -- exec busybox-7dff88458-k9m92 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-286000 -- exec busybox-7dff88458-99xmp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p ha-286000 -- exec busybox-7dff88458-99xmp -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (123.949152ms)

** stderr ** 
	Error from server (BadRequest): pod busybox-7dff88458-99xmp does not have a host assigned

** /stderr **
ha_test.go:191: Pod busybox-7dff88458-99xmp could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-286000 -- exec busybox-7dff88458-dvmvk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-286000 -- exec busybox-7dff88458-k9m92 -- nslookup kubernetes.default.svc.cluster.local
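The BadRequest "pod busybox-7dff88458-99xmp does not have a host assigned" responses are the API server refusing to exec into a pod that was never scheduled onto a node; the two replicas that did resolve DNS are the ones holding the pod IPs listed earlier. Describing the Pending pod (again illustrative, outside the harness) would surface its scheduling events:

	kubectl --context ha-286000 describe pod busybox-7dff88458-99xmp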
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-286000 -n ha-286000
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-286000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-286000 logs -n 25: (2.191403944s)
helpers_test.go:252: TestMultiControlPlane/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| delete  | -p functional-373000                 | functional-373000 | jenkins | v1.33.1 | 16 Aug 24 10:01 PDT | 16 Aug 24 10:01 PDT |
	| start   | -p ha-286000 --wait=true             | ha-286000         | jenkins | v1.33.1 | 16 Aug 24 10:01 PDT |                     |
	|         | --memory=2200 --ha                   |                   |         |         |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |         |         |                     |                     |
	|         | --driver=hyperkit                    |                   |         |         |                     |                     |
	| kubectl | -p ha-286000 -- apply -f             | ha-286000         | jenkins | v1.33.1 | 16 Aug 24 10:05 PDT | 16 Aug 24 10:05 PDT |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |         |         |                     |                     |
	| kubectl | -p ha-286000 -- rollout status       | ha-286000         | jenkins | v1.33.1 | 16 Aug 24 10:05 PDT |                     |
	|         | deployment/busybox                   |                   |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000         | jenkins | v1.33.1 | 16 Aug 24 10:15 PDT | 16 Aug 24 10:15 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000         | jenkins | v1.33.1 | 16 Aug 24 10:15 PDT | 16 Aug 24 10:15 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000         | jenkins | v1.33.1 | 16 Aug 24 10:15 PDT | 16 Aug 24 10:15 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000         | jenkins | v1.33.1 | 16 Aug 24 10:15 PDT | 16 Aug 24 10:15 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000         | jenkins | v1.33.1 | 16 Aug 24 10:15 PDT | 16 Aug 24 10:15 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000         | jenkins | v1.33.1 | 16 Aug 24 10:15 PDT | 16 Aug 24 10:15 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000         | jenkins | v1.33.1 | 16 Aug 24 10:15 PDT | 16 Aug 24 10:15 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000         | jenkins | v1.33.1 | 16 Aug 24 10:15 PDT | 16 Aug 24 10:15 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000         | jenkins | v1.33.1 | 16 Aug 24 10:15 PDT | 16 Aug 24 10:15 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000         | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000         | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000         | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |                   |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000         | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT |                     |
	|         | busybox-7dff88458-99xmp --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000         | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000         | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92 --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000         | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT |                     |
	|         | busybox-7dff88458-99xmp --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000         | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000         | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92 --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000         | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT |                     |
	|         | busybox-7dff88458-99xmp -- nslookup  |                   |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000         | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk -- nslookup  |                   |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000         | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92 -- nslookup  |                   |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |         |         |                     |                     |
	|---------|--------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 10:01:48
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 10:01:48.113296    3758 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:01:48.113462    3758 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:01:48.113467    3758 out.go:358] Setting ErrFile to fd 2...
	I0816 10:01:48.113470    3758 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:01:48.113648    3758 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1276/.minikube/bin
	I0816 10:01:48.115192    3758 out.go:352] Setting JSON to false
	I0816 10:01:48.140204    3758 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1878,"bootTime":1723825830,"procs":433,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0816 10:01:48.140316    3758 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:01:48.194498    3758 out.go:177] * [ha-286000] minikube v1.33.1 on Darwin 14.6.1
	I0816 10:01:48.236650    3758 notify.go:220] Checking for updates...
	I0816 10:01:48.262360    3758 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:01:48.320415    3758 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:01:48.381368    3758 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0816 10:01:48.452505    3758 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:01:48.474398    3758 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:01:48.495459    3758 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:01:48.520120    3758 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:01:48.550379    3758 out.go:177] * Using the hyperkit driver based on user configuration
	I0816 10:01:48.592331    3758 start.go:297] selected driver: hyperkit
	I0816 10:01:48.592358    3758 start.go:901] validating driver "hyperkit" against <nil>
	I0816 10:01:48.592382    3758 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:01:48.596757    3758 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:01:48.596907    3758 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19461-1276/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0816 10:01:48.605545    3758 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0816 10:01:48.609748    3758 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:01:48.609772    3758 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0816 10:01:48.609805    3758 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 10:01:48.610046    3758 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:01:48.610094    3758 cni.go:84] Creating CNI manager for ""
	I0816 10:01:48.610103    3758 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0816 10:01:48.610108    3758 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0816 10:01:48.610236    3758 start.go:340] cluster config:
	{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:01:48.610323    3758 iso.go:125] acquiring lock: {Name:mkb430b43b5e7a8a683454482519837ed996c78d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:01:48.632334    3758 out.go:177] * Starting "ha-286000" primary control-plane node in "ha-286000" cluster
	I0816 10:01:48.653456    3758 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:01:48.653537    3758 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0816 10:01:48.653563    3758 cache.go:56] Caching tarball of preloaded images
	I0816 10:01:48.653777    3758 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 10:01:48.653813    3758 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:01:48.654332    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:01:48.654370    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json: {Name:mk79fdae2e45cdfc987caaf16c5f211150e90dc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:01:48.654995    3758 start.go:360] acquireMachinesLock for ha-286000: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:01:48.655106    3758 start.go:364] duration metric: took 87.458µs to acquireMachinesLock for "ha-286000"
	I0816 10:01:48.655154    3758 start.go:93] Provisioning new machine with config: &{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:01:48.655246    3758 start.go:125] createHost starting for "" (driver="hyperkit")
	I0816 10:01:48.677450    3758 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 10:01:48.677701    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:01:48.677786    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:01:48.687593    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51004
	I0816 10:01:48.687935    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:01:48.688347    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:01:48.688357    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:01:48.688562    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:01:48.688668    3758 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:01:48.688765    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:01:48.688895    3758 start.go:159] libmachine.API.Create for "ha-286000" (driver="hyperkit")
	I0816 10:01:48.688916    3758 client.go:168] LocalClient.Create starting
	I0816 10:01:48.688948    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem
	I0816 10:01:48.688999    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:01:48.689012    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:01:48.689063    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem
	I0816 10:01:48.689100    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:01:48.689111    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:01:48.689128    3758 main.go:141] libmachine: Running pre-create checks...
	I0816 10:01:48.689135    3758 main.go:141] libmachine: (ha-286000) Calling .PreCreateCheck
	I0816 10:01:48.689224    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:48.689427    3758 main.go:141] libmachine: (ha-286000) Calling .GetConfigRaw
	I0816 10:01:48.698771    3758 main.go:141] libmachine: Creating machine...
	I0816 10:01:48.698794    3758 main.go:141] libmachine: (ha-286000) Calling .Create
	I0816 10:01:48.699018    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:48.699296    3758 main.go:141] libmachine: (ha-286000) DBG | I0816 10:01:48.699015    3766 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:01:48.699450    3758 main.go:141] libmachine: (ha-286000) Downloading /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1276/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 10:01:48.884950    3758 main.go:141] libmachine: (ha-286000) DBG | I0816 10:01:48.884833    3766 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa...
	I0816 10:01:48.986348    3758 main.go:141] libmachine: (ha-286000) DBG | I0816 10:01:48.986275    3766 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/ha-286000.rawdisk...
	I0816 10:01:48.986388    3758 main.go:141] libmachine: (ha-286000) DBG | Writing magic tar header
	I0816 10:01:48.986396    3758 main.go:141] libmachine: (ha-286000) DBG | Writing SSH key tar header
	I0816 10:01:48.987038    3758 main.go:141] libmachine: (ha-286000) DBG | I0816 10:01:48.987004    3766 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000 ...
	I0816 10:01:49.360700    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:49.360726    3758 main.go:141] libmachine: (ha-286000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/hyperkit.pid
	I0816 10:01:49.360741    3758 main.go:141] libmachine: (ha-286000) DBG | Using UUID ad96de67-e238-408c-89eb-d74e5b68d297
	I0816 10:01:49.471852    3758 main.go:141] libmachine: (ha-286000) DBG | Generated MAC 66:c8:48:4e:12:1b
	I0816 10:01:49.471873    3758 main.go:141] libmachine: (ha-286000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000
	I0816 10:01:49.471908    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ad96de67-e238-408c-89eb-d74e5b68d297", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000b2330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:01:49.471951    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ad96de67-e238-408c-89eb-d74e5b68d297", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000b2330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:01:49.471996    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "ad96de67-e238-408c-89eb-d74e5b68d297", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/ha-286000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"}
	I0816 10:01:49.472043    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U ad96de67-e238-408c-89eb-d74e5b68d297 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/ha-286000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"
	I0816 10:01:49.472063    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 10:01:49.475179    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: Pid is 3771
	I0816 10:01:49.475583    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 0
	I0816 10:01:49.475597    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:49.475674    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:49.476510    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:49.476583    3758 main.go:141] libmachine: (ha-286000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0816 10:01:49.476592    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:01:49.476602    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:01:49.476612    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:01:49.482826    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 10:01:49.534430    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 10:01:49.535040    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:01:49.535056    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:01:49.535064    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:01:49.535072    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:01:49.909510    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 10:01:49.909527    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 10:01:50.024439    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:01:50.024459    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:01:50.024479    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:01:50.024494    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:01:50.025363    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 10:01:50.025375    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 10:01:51.477573    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 1
	I0816 10:01:51.477587    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:51.477660    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:51.478481    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:51.478504    3758 main.go:141] libmachine: (ha-286000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0816 10:01:51.478511    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:01:51.478520    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:01:51.478531    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:01:53.479582    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 2
	I0816 10:01:53.479599    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:53.479760    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:53.480630    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:53.480678    3758 main.go:141] libmachine: (ha-286000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0816 10:01:53.480688    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:01:53.480697    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:01:53.480710    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:01:55.482614    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 3
	I0816 10:01:55.482630    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:55.482703    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:55.483616    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:55.483662    3758 main.go:141] libmachine: (ha-286000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0816 10:01:55.483672    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:01:55.483679    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:01:55.483687    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:01:55.581983    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:55 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 10:01:55.582063    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:55 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 10:01:55.582075    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:55 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 10:01:55.606414    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:55 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 10:01:57.484306    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 4
	I0816 10:01:57.484322    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:57.484414    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:57.485196    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:57.485261    3758 main.go:141] libmachine: (ha-286000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0816 10:01:57.485273    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:01:57.485286    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:01:57.485294    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:01:59.485978    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 5
	I0816 10:01:59.486008    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:59.486192    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:59.487610    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:59.487709    3758 main.go:141] libmachine: (ha-286000) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:01:59.487730    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:01:59.487784    3758 main.go:141] libmachine: (ha-286000) DBG | Found match: 66:c8:48:4e:12:1b
	I0816 10:01:59.487797    3758 main.go:141] libmachine: (ha-286000) DBG | IP: 192.169.0.5
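	
	The retry loop above polls /var/db/dhcpd_leases until an entry with the VM's MAC address appears, then takes that entry's IP. For reference, a minimal Go sketch of that lookup follows; it is not the driver's actual code, and it assumes the macOS bootpd lease format, where ip_address= precedes hw_address= inside each lease block (macOS may also strip leading zeros from MAC octets, which this sketch ignores).
	
	package main
	
	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)
	
	// leaseIPForMAC scans the bootpd lease file for a block whose
	// hw_address ends with the given MAC and returns the ip_address
	// seen most recently before it.
	func leaseIPForMAC(path, mac string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()
	
		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if strings.HasPrefix(line, "ip_address=") {
				ip = strings.TrimPrefix(line, "ip_address=")
			}
			// hw_address lines look like "hw_address=1,3e:15:d0:68:43:a3"
			if strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac) {
				return ip, nil
			}
		}
		if err := sc.Err(); err != nil {
			return "", err
		}
		return "", fmt.Errorf("no lease found for %s", mac)
	}
	
	func main() {
		ip, err := leaseIPForMAC("/var/db/dhcpd_leases", "66:c8:48:4e:12:1b")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println(ip) // 192.169.0.5 in the run above
	}
	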
	I0816 10:01:59.487831    3758 main.go:141] libmachine: (ha-286000) Calling .GetConfigRaw
	I0816 10:01:59.488860    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:01:59.489069    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:01:59.489276    3758 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 10:01:59.489289    3758 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:01:59.489463    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:59.489567    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:59.490615    3758 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 10:01:59.490628    3758 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 10:01:59.490644    3758 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 10:01:59.490652    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:01:59.490782    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:01:59.490923    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:01:59.491055    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:01:59.491190    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:01:59.491362    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:01:59.491595    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:01:59.491605    3758 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 10:01:59.511220    3758 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0816 10:02:02.573095    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
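	
	The `exit 0` probe above is how libmachine decides the guest shell is usable: the first handshake fails while the VM is still booting and generating host keys, and a retry succeeds about three seconds later. A hedged sketch of the same wait-for-SSH loop using golang.org/x/crypto/ssh follows (user, key path, and address taken from the log; this is not minikube's implementation).
	
	package main
	
	import (
		"fmt"
		"log"
		"os"
		"time"
	
		"golang.org/x/crypto/ssh"
	)
	
	// waitForSSH dials the guest and runs `exit 0` until it succeeds,
	// tolerating the initial handshake/auth failures seen above.
	func waitForSSH(addr string, cfg *ssh.ClientConfig, attempts int) error {
		for i := 0; i < attempts; i++ {
			if client, err := ssh.Dial("tcp", addr, cfg); err == nil {
				sess, serr := client.NewSession()
				if serr == nil {
					rerr := sess.Run("exit 0")
					sess.Close()
					client.Close()
					if rerr == nil {
						return nil // guest shell is ready
					}
				} else {
					client.Close()
				}
			}
			time.Sleep(2 * time.Second) // short backoff between probes
		}
		return fmt.Errorf("ssh not ready at %s after %d attempts", addr, attempts)
	}
	
	func main() {
		key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/ha-286000/id_rsa"))
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a throwaway test VM
			Timeout:         5 * time.Second,
		}
		if err := waitForSSH("192.169.0.5:22", cfg, 30); err != nil {
			log.Fatal(err)
		}
		fmt.Println("SSH is up")
	}
	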
	I0816 10:02:02.573108    3758 main.go:141] libmachine: Detecting the provisioner...
	I0816 10:02:02.573113    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:02.573255    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:02.573385    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.573489    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.573602    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:02.573753    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:02.573906    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:02.573914    3758 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 10:02:02.632510    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 10:02:02.632561    3758 main.go:141] libmachine: found compatible host: buildroot
	I0816 10:02:02.632568    3758 main.go:141] libmachine: Provisioning with buildroot...
	I0816 10:02:02.632573    3758 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:02:02.632721    3758 buildroot.go:166] provisioning hostname "ha-286000"
	I0816 10:02:02.632732    3758 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:02:02.632834    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:02.632956    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:02.633046    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.633122    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.633236    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:02.633363    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:02.633492    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:02.633500    3758 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-286000 && echo "ha-286000" | sudo tee /etc/hostname
	I0816 10:02:02.702336    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-286000
	
	I0816 10:02:02.702354    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:02.702482    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:02.702569    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.702670    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.702756    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:02.702893    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:02.703040    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:02.703052    3758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-286000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-286000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-286000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 10:02:02.766997    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
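	
	The shell snippet above pins the hostname in /etc/hosts idempotently: if an entry for the name already exists it does nothing, otherwise it rewrites an existing 127.0.1.1 line or appends a new one. The same idiom in plain Go, as a sketch only (path and entry format as shown above):
	
	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	// ensureHostsEntry mirrors the grep/sed/tee sequence above.
	func ensureHostsEntry(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		lines := strings.Split(string(data), "\n")
		for _, l := range lines {
			if f := strings.Fields(l); len(f) >= 2 && f[len(f)-1] == hostname {
				return nil // an entry for this hostname already exists
			}
		}
		replaced := false
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + hostname // rewrite the stale entry
				replaced = true
				break
			}
		}
		if !replaced {
			lines = append(lines, "127.0.1.1 "+hostname)
		}
		return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
	}
	
	func main() {
		if err := ensureHostsEntry("/etc/hosts", "ha-286000"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
	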
	I0816 10:02:02.767016    3758 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19461-1276/.minikube CaCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19461-1276/.minikube}
	I0816 10:02:02.767026    3758 buildroot.go:174] setting up certificates
	I0816 10:02:02.767038    3758 provision.go:84] configureAuth start
	I0816 10:02:02.767044    3758 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:02:02.767175    3758 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:02:02.767270    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:02.767372    3758 provision.go:143] copyHostCerts
	I0816 10:02:02.767406    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:02:02.767476    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem, removing ...
	I0816 10:02:02.767485    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:02:02.767630    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem (1679 bytes)
	I0816 10:02:02.767837    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:02:02.767868    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem, removing ...
	I0816 10:02:02.767873    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:02:02.767991    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem (1078 bytes)
	I0816 10:02:02.768146    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:02:02.768179    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem, removing ...
	I0816 10:02:02.768184    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:02:02.768268    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem (1123 bytes)
	I0816 10:02:02.768410    3758 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem org=jenkins.ha-286000 san=[127.0.0.1 192.169.0.5 ha-286000 localhost minikube]
	I0816 10:02:02.915225    3758 provision.go:177] copyRemoteCerts
	I0816 10:02:02.915279    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 10:02:02.915299    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:02.915444    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:02.915558    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.915640    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:02.915723    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:02.953069    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 10:02:02.953145    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 10:02:02.973516    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 10:02:02.973600    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0816 10:02:02.993038    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 10:02:02.993100    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 10:02:03.013565    3758 provision.go:87] duration metric: took 246.514857ms to configureAuth
	I0816 10:02:03.013581    3758 buildroot.go:189] setting minikube options for container-runtime
	I0816 10:02:03.013724    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:02:03.013737    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:03.013889    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:03.013978    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:03.014067    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.014147    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.014250    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:03.014369    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:03.014498    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:03.014506    3758 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 10:02:03.073645    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 10:02:03.073660    3758 buildroot.go:70] root file system type: tmpfs
	I0816 10:02:03.073734    3758 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 10:02:03.073745    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:03.073872    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:03.073966    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.074063    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.074164    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:03.074292    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:03.074429    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:03.074479    3758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 10:02:03.148891    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 10:02:03.148912    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:03.149052    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:03.149138    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.149235    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.149324    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:03.149466    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:03.149604    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:03.149616    3758 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 10:02:04.780498    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0816 10:02:04.780519    3758 main.go:141] libmachine: Checking connection to Docker...
	I0816 10:02:04.780526    3758 main.go:141] libmachine: (ha-286000) Calling .GetURL
	I0816 10:02:04.780691    3758 main.go:141] libmachine: Docker is up and running!
	I0816 10:02:04.780698    3758 main.go:141] libmachine: Reticulating splines...
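	
	Note the pattern in the preceding commands: the unit is rendered to docker.service.new, diffed against the current file, and only swapped in (followed by daemon-reload, enable, and restart) when the two differ. Here the diff fails because no unit existed yet, so the new file is installed and the service enabled. A rough Go equivalent of that compare-then-swap step, assuming root on a systemd host (a sketch, not the code used here):
	
	package main
	
	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)
	
	// updateUnit writes the rendered unit to <path>.new and only moves it
	// into place and cycles the daemon when the content actually differs
	// (or when no unit exists yet).
	func updateUnit(path string, rendered []byte) error {
		if current, err := os.ReadFile(path); err == nil && bytes.Equal(current, rendered) {
			return nil // unchanged: skip the restart entirely
		}
		if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
			return err
		}
		if err := os.Rename(path+".new", path); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"systemctl", "daemon-reload"},
			{"systemctl", "enable", "docker"},
			{"systemctl", "restart", "docker"},
		} {
			if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
				return fmt.Errorf("%v failed: %v (%s)", args, err, out)
			}
		}
		return nil
	}
	
	func main() {
		unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n") // placeholder content
		if err := updateUnit("/lib/systemd/system/docker.service", unit); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
	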
	I0816 10:02:04.780702    3758 client.go:171] duration metric: took 16.091876944s to LocalClient.Create
	I0816 10:02:04.780713    3758 start.go:167] duration metric: took 16.091916939s to libmachine.API.Create "ha-286000"
	I0816 10:02:04.780722    3758 start.go:293] postStartSetup for "ha-286000" (driver="hyperkit")
	I0816 10:02:04.780729    3758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 10:02:04.780738    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:04.780873    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 10:02:04.780884    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:04.780956    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:04.781047    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:04.781154    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:04.781275    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:04.819650    3758 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 10:02:04.823314    3758 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 10:02:04.823335    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/addons for local assets ...
	I0816 10:02:04.823443    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/files for local assets ...
	I0816 10:02:04.823624    3758 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> 18312.pem in /etc/ssl/certs
	I0816 10:02:04.823631    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /etc/ssl/certs/18312.pem
	I0816 10:02:04.823837    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 10:02:04.831860    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:02:04.850901    3758 start.go:296] duration metric: took 70.172878ms for postStartSetup
	I0816 10:02:04.850935    3758 main.go:141] libmachine: (ha-286000) Calling .GetConfigRaw
	I0816 10:02:04.851561    3758 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:02:04.851702    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:02:04.852053    3758 start.go:128] duration metric: took 16.196888252s to createHost
	I0816 10:02:04.852071    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:04.852167    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:04.852261    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:04.852344    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:04.852420    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:04.852525    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:04.852648    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:04.852655    3758 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 10:02:04.911299    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723827724.986438448
	
	I0816 10:02:04.911317    3758 fix.go:216] guest clock: 1723827724.986438448
	I0816 10:02:04.911322    3758 fix.go:229] Guest: 2024-08-16 10:02:04.986438448 -0700 PDT Remote: 2024-08-16 10:02:04.852061 -0700 PDT m=+16.776125584 (delta=134.377448ms)
	I0816 10:02:04.911341    3758 fix.go:200] guest clock delta is within tolerance: 134.377448ms
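	
	The guest-clock check above runs `date +%s.%N` in the VM and compares the result with host time; the 134.377448ms delta is under minikube's tolerance, so no clock adjustment is needed. A small sketch reproducing that arithmetic with the exact values from the log:
	
	package main
	
	import (
		"fmt"
		"strconv"
		"time"
	)
	
	// clockDelta parses the guest's `date +%s.%N` output and returns
	// how far it is ahead of the host timestamp.
	func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(guestOut, 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(host), nil
	}
	
	func main() {
		// Guest and host timestamps taken from the log above.
		d, _ := clockDelta("1723827724.986438448", time.Unix(0, 1723827724852061000))
		fmt.Println(d) // ≈134.377ms, within tolerance
	}
	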
	I0816 10:02:04.911345    3758 start.go:83] releasing machines lock for "ha-286000", held for 16.256325696s
	I0816 10:02:04.911364    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:04.911498    3758 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:02:04.911592    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:04.911942    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:04.912068    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:04.912144    3758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 10:02:04.912171    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:04.912218    3758 ssh_runner.go:195] Run: cat /version.json
	I0816 10:02:04.912231    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:04.912262    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:04.912305    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:04.912360    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:04.912384    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:04.912437    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:04.912464    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:04.912547    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:04.912567    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:04.945688    3758 ssh_runner.go:195] Run: systemctl --version
	I0816 10:02:04.997158    3758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 10:02:05.002278    3758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 10:02:05.002315    3758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 10:02:05.015089    3758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 10:02:05.015098    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:02:05.015195    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:02:05.030338    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 10:02:05.038588    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 10:02:05.046898    3758 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 10:02:05.046932    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 10:02:05.055211    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:02:05.063533    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 10:02:05.073244    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:02:05.081895    3758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 10:02:05.090915    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 10:02:05.099773    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 10:02:05.108719    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 10:02:05.117772    3758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 10:02:05.125574    3758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 10:02:05.145403    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:05.261568    3758 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0816 10:02:05.278920    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:02:05.278996    3758 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 10:02:05.293925    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:02:05.306704    3758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 10:02:05.320000    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:02:05.330230    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:02:05.340255    3758 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 10:02:05.369098    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:02:05.379374    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:02:05.395120    3758 ssh_runner.go:195] Run: which cri-dockerd
	I0816 10:02:05.397921    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 10:02:05.404866    3758 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 10:02:05.417935    3758 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 10:02:05.514793    3758 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 10:02:05.626208    3758 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 10:02:05.626292    3758 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 10:02:05.640317    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:05.737978    3758 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:02:08.105430    3758 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.367458496s)
	I0816 10:02:08.105491    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0816 10:02:08.115970    3758 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0816 10:02:08.129325    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:02:08.140312    3758 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0816 10:02:08.237628    3758 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0816 10:02:08.340285    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:08.436168    3758 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0816 10:02:08.457808    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:02:08.468314    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:08.561830    3758 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0816 10:02:08.620080    3758 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0816 10:02:08.620164    3758 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0816 10:02:08.624716    3758 start.go:563] Will wait 60s for crictl version
	I0816 10:02:08.624764    3758 ssh_runner.go:195] Run: which crictl
	I0816 10:02:08.627806    3758 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 10:02:08.660437    3758 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0816 10:02:08.660505    3758 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:02:08.678073    3758 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:02:08.722165    3758 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0816 10:02:08.722217    3758 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:02:08.722605    3758 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0816 10:02:08.727140    3758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:02:08.736769    3758 kubeadm.go:883] updating cluster {Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 10:02:08.736830    3758 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:02:08.736887    3758 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 10:02:08.748223    3758 docker.go:685] Got preloaded images: 
	I0816 10:02:08.748236    3758 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.0 wasn't preloaded
	I0816 10:02:08.748294    3758 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0816 10:02:08.755601    3758 ssh_runner.go:195] Run: which lz4
	I0816 10:02:08.758510    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0816 10:02:08.758630    3758 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 10:02:08.761594    3758 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 10:02:08.761610    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342554258 bytes)
	I0816 10:02:09.771496    3758 docker.go:649] duration metric: took 1.012922149s to copy over tarball
	I0816 10:02:09.771559    3758 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 10:02:12.050040    3758 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.27849665s)
	I0816 10:02:12.050054    3758 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 10:02:12.075438    3758 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0816 10:02:12.083331    3758 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0816 10:02:12.097261    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:12.204348    3758 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:02:14.516272    3758 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.311936705s)
	I0816 10:02:14.516363    3758 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 10:02:14.529500    3758 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0816 10:02:14.529519    3758 cache_images.go:84] Images are preloaded, skipping loading
	I0816 10:02:14.529530    3758 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.0 docker true true} ...
	I0816 10:02:14.529607    3758 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-286000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 10:02:14.529674    3758 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0816 10:02:14.568093    3758 cni.go:84] Creating CNI manager for ""
	I0816 10:02:14.568107    3758 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0816 10:02:14.568120    3758 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 10:02:14.568134    3758 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-286000 NodeName:ha-286000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 10:02:14.568222    3758 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-286000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
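	
	The kubeadm config generated above is multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by ---). A quick sketch for sanity-checking such a file before handing it to kubeadm, assuming it has been saved locally as kubeadm.yaml; gopkg.in/yaml.v3 is used here purely for illustration:
	
	package main
	
	import (
		"fmt"
		"log"
		"os"
		"strings"
	
		"gopkg.in/yaml.v3"
	)
	
	func main() {
		raw, err := os.ReadFile("kubeadm.yaml") // e.g. copied from /var/tmp/minikube/kubeadm.yaml
		if err != nil {
			log.Fatal(err)
		}
		// Split on the document separator and check each document's
		// apiVersion/kind parses before use.
		for _, doc := range strings.Split(string(raw), "\n---\n") {
			var m struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
				log.Fatalf("invalid document: %v", err)
			}
			fmt.Printf("%s %s\n", m.APIVersion, m.Kind)
		}
	}
	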
	
	I0816 10:02:14.568237    3758 kube-vip.go:115] generating kube-vip config ...
	I0816 10:02:14.568286    3758 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 10:02:14.580706    3758 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 10:02:14.580778    3758 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
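	
	kube-vip runs as a static pod from /etc/kubernetes/manifests and advertises the control-plane VIP 192.169.0.254 on eth0 with load-balancing enabled (lb_enable/lb_port above). When debugging VIP problems it can help to read the advertised address back out of the generated manifest; a sketch of that follows, with the struct shape inferred from the YAML above:
	
	package main
	
	import (
		"fmt"
		"log"
		"os"
	
		"gopkg.in/yaml.v3"
	)
	
	// pod models only the fields we need from the static pod manifest.
	type pod struct {
		Spec struct {
			Containers []struct {
				Image string `yaml:"image"`
				Env   []struct {
					Name  string `yaml:"name"`
					Value string `yaml:"value"`
				} `yaml:"env"`
			} `yaml:"containers"`
		} `yaml:"spec"`
	}
	
	func main() {
		raw, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
		if err != nil {
			log.Fatal(err)
		}
		var p pod
		if err := yaml.Unmarshal(raw, &p); err != nil {
			log.Fatal(err)
		}
		for _, c := range p.Spec.Containers {
			for _, e := range c.Env {
				if e.Name == "address" {
					fmt.Printf("%s advertises VIP %s\n", c.Image, e.Value)
				}
			}
		}
	}
	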
	I0816 10:02:14.580832    3758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 10:02:14.588124    3758 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 10:02:14.588174    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0816 10:02:14.595283    3758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0816 10:02:14.608721    3758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 10:02:14.622036    3758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0816 10:02:14.635525    3758 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1446 bytes)
	I0816 10:02:14.648868    3758 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0816 10:02:14.651748    3758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:02:14.661305    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:14.752561    3758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:02:14.768967    3758 certs.go:68] Setting up /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000 for IP: 192.169.0.5
	I0816 10:02:14.768980    3758 certs.go:194] generating shared ca certs ...
	I0816 10:02:14.768991    3758 certs.go:226] acquiring lock for ca certs: {Name:mka549fbc467849834d2267da204028c49d88a3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:14.769183    3758 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key
	I0816 10:02:14.769258    3758 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key
	I0816 10:02:14.769269    3758 certs.go:256] generating profile certs ...
	I0816 10:02:14.769315    3758 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key
	I0816 10:02:14.769328    3758 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt with IP's: []
	I0816 10:02:14.849573    3758 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt ...
	I0816 10:02:14.849589    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt: {Name:mk9c6a2b5871e8d33f12e0a58268b028ea37ab4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:14.849911    3758 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key ...
	I0816 10:02:14.849919    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key: {Name:mk7d8ed264e583d7ced6b16b60c72c11b15a1f22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:14.850137    3758 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.af99fd6a
	I0816 10:02:14.850151    3758 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.af99fd6a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.254]
	I0816 10:02:14.968129    3758 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.af99fd6a ...
	I0816 10:02:14.968143    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.af99fd6a: {Name:mk3b8945a16852dca8274ec1ab3a9f2eade80831 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:14.968464    3758 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.af99fd6a ...
	I0816 10:02:14.968473    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.af99fd6a: {Name:mk90fcce3348a214c4733fe5a1fb806faff7773f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:14.968717    3758 certs.go:381] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.af99fd6a -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt
	I0816 10:02:14.968940    3758 certs.go:385] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.af99fd6a -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key
	I0816 10:02:14.969171    3758 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key
	I0816 10:02:14.969185    3758 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt with IP's: []
	I0816 10:02:15.029329    3758 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt ...
	I0816 10:02:15.029338    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt: {Name:mk70aaa3903fb58df2124d779dbea07aa95769f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:15.029604    3758 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key ...
	I0816 10:02:15.029611    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key: {Name:mka3a85354927e8da31f9dfc67f92dea3c77ca65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
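	
	The profile certs generated above are signed by the shared minikubeCA and carry IP SANs covering the in-cluster service VIP (10.96.0.1), localhost, the node IP (192.169.0.5), and the HA control-plane VIP (192.169.0.254). An illustrative crypto/x509 sketch of issuing a server cert with those SANs follows; it uses a throwaway CA and RSA keys for brevity and is not the certs.go code path:
	
	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func must[T any](v T, err error) T {
		if err != nil {
			panic(err)
		}
		return v
	}
	
	func main() {
		// Throwaway CA standing in for minikubeCA (the real one is reused from disk).
		caKey := must(rsa.GenerateKey(rand.Reader, 2048))
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
		caCert := must(x509.ParseCertificate(caDER))
	
		// Server certificate with the SANs listed in the log above.
		key := must(rsa.GenerateKey(rand.Reader, 2048))
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-286000", "localhost", "minikube"},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.169.0.5"),
				net.ParseIP("192.169.0.254"),
			},
		}
		der := must(x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey))
		_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
	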
	I0816 10:02:15.029850    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 10:02:15.029876    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 10:02:15.029898    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 10:02:15.029940    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 10:02:15.029958    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 10:02:15.029990    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 10:02:15.030008    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 10:02:15.030025    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 10:02:15.030118    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem (1338 bytes)
	W0816 10:02:15.030163    3758 certs.go:480] ignoring /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831_empty.pem, impossibly tiny 0 bytes
	I0816 10:02:15.030171    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 10:02:15.030240    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem (1078 bytes)
	I0816 10:02:15.030268    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem (1123 bytes)
	I0816 10:02:15.030354    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem (1679 bytes)
	I0816 10:02:15.030467    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:02:15.030499    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:15.030525    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem -> /usr/share/ca-certificates/1831.pem
	I0816 10:02:15.030542    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /usr/share/ca-certificates/18312.pem
	I0816 10:02:15.031030    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 10:02:15.051618    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 10:02:15.071318    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 10:02:15.091017    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 10:02:15.110497    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0816 10:02:15.131033    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 10:02:15.150747    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 10:02:15.170186    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 10:02:15.189808    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 10:02:15.209868    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem --> /usr/share/ca-certificates/1831.pem (1338 bytes)
	I0816 10:02:15.229378    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /usr/share/ca-certificates/18312.pem (1708 bytes)
	I0816 10:02:15.248667    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 10:02:15.262035    3758 ssh_runner.go:195] Run: openssl version
	I0816 10:02:15.266135    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 10:02:15.275959    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:15.279367    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:48 /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:15.279403    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:15.283611    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 10:02:15.291872    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1831.pem && ln -fs /usr/share/ca-certificates/1831.pem /etc/ssl/certs/1831.pem"
	I0816 10:02:15.299890    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1831.pem
	I0816 10:02:15.303179    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 16:57 /usr/share/ca-certificates/1831.pem
	I0816 10:02:15.303212    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1831.pem
	I0816 10:02:15.307423    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1831.pem /etc/ssl/certs/51391683.0"
	I0816 10:02:15.315741    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18312.pem && ln -fs /usr/share/ca-certificates/18312.pem /etc/ssl/certs/18312.pem"
	I0816 10:02:15.324088    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18312.pem
	I0816 10:02:15.327441    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 16:57 /usr/share/ca-certificates/18312.pem
	I0816 10:02:15.327479    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18312.pem
	I0816 10:02:15.331723    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18312.pem /etc/ssl/certs/3ec20f2e.0"
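	
	The three `openssl x509 -hash` / `ln -fs` pairs above install each CA into the guest's OpenSSL trust store: the printed hash names the `<hash>.0` symlink that OpenSSL's CApath lookup resolves (minikubeCA.pem -> b5213941.0, 1831.pem -> 51391683.0, 18312.pem -> 3ec20f2e.0). A hedged Go sketch of the equivalent operation run locally (hypothetical `installTrustLink`; minikube issues the shell commands over SSH instead):
	
	package sketch
	
	import (
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	// installTrustLink computes the cert's OpenSSL subject hash, then
	// symlinks <hash>.0 to the cert so CApath lookups can find it.
	func installTrustLink(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		os.Remove(link) // ln -fs semantics: replace any existing link
		return os.Symlink(certPath, link)
	}
	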
	I0816 10:02:15.339921    3758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 10:02:15.343061    3758 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 10:02:15.343109    3758 kubeadm.go:392] StartCluster: {Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:02:15.343194    3758 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0816 10:02:15.356016    3758 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 10:02:15.363405    3758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 10:02:15.370635    3758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 10:02:15.377806    3758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 10:02:15.377819    3758 kubeadm.go:157] found existing configuration files:
	
	I0816 10:02:15.377855    3758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 10:02:15.384868    3758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 10:02:15.384903    3758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 10:02:15.392001    3758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 10:02:15.398935    3758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 10:02:15.398971    3758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 10:02:15.406181    3758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 10:02:15.413108    3758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 10:02:15.413142    3758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 10:02:15.422603    3758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 10:02:15.435692    3758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 10:02:15.435749    3758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
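	
	The four grep/rm pairs above are the stale-config sweep: any /etc/kubernetes/*.conf that does not reference https://control-plane.minikube.internal:8443 is deleted so kubeadm can regenerate it (here the files simply do not exist yet, hence the status-2 exits). A sketch of that loop's shape, assuming a `run` stand-in for ssh_runner:
	
	package sketch
	
	import "fmt"
	
	// cleanupStaleConfigs drops kubeconfigs that don't reference the
	// expected control-plane endpoint, matching the grep/rm pairs above.
	func cleanupStaleConfigs(run func(cmd string) error) {
		const endpoint = "https://control-plane.minikube.internal:8443"
		for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
			path := "/etc/kubernetes/" + f
			// grep exits non-zero when the endpoint (or the file itself) is missing
			if err := run(fmt.Sprintf("sudo grep %s %s", endpoint, path)); err != nil {
				_ = run("sudo rm -f " + path)
			}
		}
	}
	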
	I0816 10:02:15.450920    3758 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 10:02:15.521417    3758 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 10:02:15.521501    3758 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 10:02:15.590843    3758 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 10:02:15.590923    3758 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 10:02:15.591008    3758 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 10:02:15.600874    3758 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 10:02:15.632377    3758 out.go:235]   - Generating certificates and keys ...
	I0816 10:02:15.632427    3758 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 10:02:15.632475    3758 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 10:02:15.791885    3758 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0816 10:02:15.852803    3758 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0816 10:02:15.961982    3758 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0816 10:02:16.146557    3758 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0816 10:02:16.493401    3758 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0816 10:02:16.493556    3758 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-286000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I0816 10:02:16.704832    3758 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0816 10:02:16.705108    3758 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-286000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I0816 10:02:16.922818    3758 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0816 10:02:17.082810    3758 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0816 10:02:17.393939    3758 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0816 10:02:17.394070    3758 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 10:02:17.465159    3758 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 10:02:17.731984    3758 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 10:02:18.019833    3758 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 10:02:18.164168    3758 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 10:02:18.273756    3758 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 10:02:18.274215    3758 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 10:02:18.275949    3758 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 10:02:18.300490    3758 out.go:235]   - Booting up control plane ...
	I0816 10:02:18.300569    3758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 10:02:18.300638    3758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 10:02:18.300693    3758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 10:02:18.300780    3758 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 10:02:18.300850    3758 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 10:02:18.300879    3758 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 10:02:18.405245    3758 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 10:02:18.405330    3758 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 10:02:18.909805    3758 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 504.760374ms
	I0816 10:02:18.909928    3758 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0816 10:02:24.844637    3758 kubeadm.go:310] [api-check] The API server is healthy after 5.938882642s
	I0816 10:02:24.854864    3758 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 10:02:24.862134    3758 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 10:02:24.890940    3758 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 10:02:24.891101    3758 kubeadm.go:310] [mark-control-plane] Marking the node ha-286000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 10:02:24.901441    3758 kubeadm.go:310] [bootstrap-token] Using token: 73merd.1elxantqs1p5mkiz
	I0816 10:02:24.939545    3758 out.go:235]   - Configuring RBAC rules ...
	I0816 10:02:24.939737    3758 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 10:02:24.944670    3758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 10:02:24.951393    3758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 10:02:24.953383    3758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 10:02:24.955899    3758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 10:02:24.958387    3758 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 10:02:25.251799    3758 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 10:02:25.676108    3758 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 10:02:26.249233    3758 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 10:02:26.249936    3758 kubeadm.go:310] 
	I0816 10:02:26.250001    3758 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 10:02:26.250014    3758 kubeadm.go:310] 
	I0816 10:02:26.250090    3758 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 10:02:26.250102    3758 kubeadm.go:310] 
	I0816 10:02:26.250122    3758 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 10:02:26.250175    3758 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 10:02:26.250221    3758 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 10:02:26.250229    3758 kubeadm.go:310] 
	I0816 10:02:26.250268    3758 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 10:02:26.250272    3758 kubeadm.go:310] 
	I0816 10:02:26.250305    3758 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 10:02:26.250313    3758 kubeadm.go:310] 
	I0816 10:02:26.250359    3758 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 10:02:26.250425    3758 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 10:02:26.250484    3758 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 10:02:26.250491    3758 kubeadm.go:310] 
	I0816 10:02:26.250569    3758 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 10:02:26.250648    3758 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 10:02:26.250656    3758 kubeadm.go:310] 
	I0816 10:02:26.250716    3758 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 73merd.1elxantqs1p5mkiz \
	I0816 10:02:26.250793    3758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:995590413ea712c4f7db2cf849d28264ebc291e6cd53d741ce1d22c1daaa1602 \
	I0816 10:02:26.250811    3758 kubeadm.go:310] 	--control-plane 
	I0816 10:02:26.250817    3758 kubeadm.go:310] 
	I0816 10:02:26.250879    3758 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 10:02:26.250885    3758 kubeadm.go:310] 
	I0816 10:02:26.250947    3758 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 73merd.1elxantqs1p5mkiz \
	I0816 10:02:26.251036    3758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:995590413ea712c4f7db2cf849d28264ebc291e6cd53d741ce1d22c1daaa1602 
	I0816 10:02:26.251297    3758 kubeadm.go:310] W0816 17:02:15.599099    1566 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 10:02:26.251521    3758 kubeadm.go:310] W0816 17:02:15.600578    1566 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 10:02:26.251608    3758 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 10:02:26.251620    3758 cni.go:84] Creating CNI manager for ""
	I0816 10:02:26.251624    3758 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0816 10:02:26.289324    3758 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 10:02:26.363318    3758 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0816 10:02:26.368971    3758 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0816 10:02:26.368983    3758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0816 10:02:26.382855    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0816 10:02:26.629869    3758 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 10:02:26.629947    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-286000 minikube.k8s.io/updated_at=2024_08_16T10_02_26_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd minikube.k8s.io/name=ha-286000 minikube.k8s.io/primary=true
	I0816 10:02:26.629949    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:26.658712    3758 ops.go:34] apiserver oom_adj: -16
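	
	The -16 read back above is the legacy OOM adjustment kubelet sets on the API server, biasing the kernel's OOM killer away from it. A trivial local equivalent of the remote `cat /proc/$(pgrep kube-apiserver)/oom_adj` probe (hypothetical helper):
	
	package sketch
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	// oomAdj reads a process's legacy OOM adjustment from procfs.
	func oomAdj(pid int) (string, error) {
		b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(b)), nil // "-16" for the API server here
	}
	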
	I0816 10:02:26.756402    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:27.257667    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:27.757259    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:28.256864    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:28.757602    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:29.256802    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:29.757282    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:29.882342    3758 kubeadm.go:1113] duration metric: took 3.252511045s to wait for elevateKubeSystemPrivileges
	I0816 10:02:29.882365    3758 kubeadm.go:394] duration metric: took 14.539506386s to StartCluster
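	
	The seven `kubectl get sa default` runs between 10:02:26.756 and 10:02:29.757 are a ~500ms poll: the default ServiceAccount appears asynchronously after kubeadm init, and minikube waits for it before the minikube-rbac binding can take effect (the 3.25s elevateKubeSystemPrivileges metric above is exactly this wait). A hedged sketch, again assuming a `run` stand-in:
	
	package sketch
	
	import (
		"fmt"
		"time"
	)
	
	// waitForDefaultSA polls until the default ServiceAccount exists,
	// the pattern behind the repeated "get sa default" runs in the log.
	func waitForDefaultSA(run func(cmd string) error, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if run("sudo kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig") == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("default service account never appeared")
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
	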
	I0816 10:02:29.882380    3758 settings.go:142] acquiring lock: {Name:mk60e377244eac756871b74b6bd45115654652a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:29.882471    3758 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:02:29.882922    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/kubeconfig: {Name:mk7f4491c8ec5cf96c96a5a1a5dbfba4301394a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:29.883167    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 10:02:29.883170    3758 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:02:29.883182    3758 start.go:241] waiting for startup goroutines ...
	I0816 10:02:29.883204    3758 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 10:02:29.883238    3758 addons.go:69] Setting storage-provisioner=true in profile "ha-286000"
	I0816 10:02:29.883244    3758 addons.go:69] Setting default-storageclass=true in profile "ha-286000"
	I0816 10:02:29.883261    3758 addons.go:234] Setting addon storage-provisioner=true in "ha-286000"
	I0816 10:02:29.883278    3758 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-286000"
	I0816 10:02:29.883280    3758 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:02:29.883305    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:02:29.883558    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:29.883573    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:29.883575    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:29.883598    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:29.892969    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51028
	I0816 10:02:29.893238    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51030
	I0816 10:02:29.893356    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:29.893605    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:29.893730    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:29.893741    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:29.893938    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:29.893951    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:29.893976    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:29.894142    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:29.894254    3758 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:02:29.894338    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:29.894365    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:29.894397    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:29.894424    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:02:29.896690    3758 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:02:29.896968    3758 kapi.go:59] client config for ha-286000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key", CAFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xa202f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 10:02:29.897373    3758 cert_rotation.go:140] Starting client certificate rotation controller
	I0816 10:02:29.897530    3758 addons.go:234] Setting addon default-storageclass=true in "ha-286000"
	I0816 10:02:29.897550    3758 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:02:29.897771    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:29.897789    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:29.903669    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51032
	I0816 10:02:29.904077    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:29.904431    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:29.904451    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:29.904736    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:29.904889    3758 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:02:29.904993    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:29.905054    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:02:29.906086    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:29.906730    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51034
	I0816 10:02:29.907081    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:29.907463    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:29.907491    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:29.907694    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:29.908045    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:29.908061    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:29.917300    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51036
	I0816 10:02:29.917667    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:29.918020    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:29.918034    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:29.918277    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:29.918384    3758 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:02:29.918483    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:29.918572    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:02:29.919534    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:29.919671    3758 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 10:02:29.919680    3758 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 10:02:29.919688    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:29.919789    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:29.919896    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:29.919990    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:29.920072    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:29.928735    3758 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 10:02:29.965711    3758 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 10:02:29.965725    3758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 10:02:29.965744    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:29.965926    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:29.966043    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:29.966151    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:29.966280    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:30.033245    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
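	
	The sed pipeline above rewrites the CoreDNS Corefile in flight: it inserts a `hosts` block ahead of the `forward . /etc/resolv.conf` directive and a `log` directive ahead of `errors`, then pushes the result back with `kubectl replace`. Reconstructed from the sed script (the surrounding stock directives are elided, only the inserted lines are certain), the replaced server block gains:
	
	.:53 {
	    log                      # inserted before "errors"
	    errors
	    ...                      # stock CoreDNS directives unchanged
	    hosts {
	       192.169.0.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}
	
	This is what lets pods resolve host.minikube.internal to the hyperkit host, as the "host record injected into CoreDNS's ConfigMap" line below confirms.
	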
	I0816 10:02:30.037035    3758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 10:02:30.090040    3758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 10:02:30.396267    3758 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
	I0816 10:02:30.396311    3758 main.go:141] libmachine: Making call to close driver server
	I0816 10:02:30.396323    3758 main.go:141] libmachine: (ha-286000) Calling .Close
	I0816 10:02:30.396482    3758 main.go:141] libmachine: Successfully made call to close driver server
	I0816 10:02:30.396488    3758 main.go:141] libmachine: (ha-286000) DBG | Closing plugin on server side
	I0816 10:02:30.396492    3758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 10:02:30.396501    3758 main.go:141] libmachine: Making call to close driver server
	I0816 10:02:30.396507    3758 main.go:141] libmachine: (ha-286000) Calling .Close
	I0816 10:02:30.396655    3758 main.go:141] libmachine: Successfully made call to close driver server
	I0816 10:02:30.396662    3758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 10:02:30.396713    3758 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0816 10:02:30.396725    3758 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0816 10:02:30.396804    3758 round_trippers.go:463] GET https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0816 10:02:30.396810    3758 round_trippers.go:469] Request Headers:
	I0816 10:02:30.396817    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:02:30.396822    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:02:30.402286    3758 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 10:02:30.402706    3758 round_trippers.go:463] PUT https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0816 10:02:30.402713    3758 round_trippers.go:469] Request Headers:
	I0816 10:02:30.402718    3758 round_trippers.go:473]     Content-Type: application/json
	I0816 10:02:30.402721    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:02:30.402723    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:02:30.404290    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:02:30.404403    3758 main.go:141] libmachine: Making call to close driver server
	I0816 10:02:30.404416    3758 main.go:141] libmachine: (ha-286000) Calling .Close
	I0816 10:02:30.404565    3758 main.go:141] libmachine: Successfully made call to close driver server
	I0816 10:02:30.404575    3758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 10:02:30.404586    3758 main.go:141] libmachine: (ha-286000) DBG | Closing plugin on server side
	I0816 10:02:30.497806    3758 main.go:141] libmachine: Making call to close driver server
	I0816 10:02:30.497818    3758 main.go:141] libmachine: (ha-286000) Calling .Close
	I0816 10:02:30.498006    3758 main.go:141] libmachine: Successfully made call to close driver server
	I0816 10:02:30.498011    3758 main.go:141] libmachine: (ha-286000) DBG | Closing plugin on server side
	I0816 10:02:30.498016    3758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 10:02:30.498023    3758 main.go:141] libmachine: Making call to close driver server
	I0816 10:02:30.498040    3758 main.go:141] libmachine: (ha-286000) Calling .Close
	I0816 10:02:30.498160    3758 main.go:141] libmachine: Successfully made call to close driver server
	I0816 10:02:30.498168    3758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 10:02:30.498179    3758 main.go:141] libmachine: (ha-286000) DBG | Closing plugin on server side
	I0816 10:02:30.538607    3758 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0816 10:02:30.596522    3758 addons.go:510] duration metric: took 713.336883ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0816 10:02:30.596578    3758 start.go:246] waiting for cluster config update ...
	I0816 10:02:30.596599    3758 start.go:255] writing updated cluster config ...
	I0816 10:02:30.634683    3758 out.go:201] 
	I0816 10:02:30.655754    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:02:30.655861    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:02:30.677634    3758 out.go:177] * Starting "ha-286000-m02" control-plane node in "ha-286000" cluster
	I0816 10:02:30.737443    3758 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:02:30.737475    3758 cache.go:56] Caching tarball of preloaded images
	I0816 10:02:30.737660    3758 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 10:02:30.737679    3758 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:02:30.737771    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:02:30.738414    3758 start.go:360] acquireMachinesLock for ha-286000-m02: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:02:30.738494    3758 start.go:364] duration metric: took 63.585µs to acquireMachinesLock for "ha-286000-m02"
	I0816 10:02:30.738517    3758 start.go:93] Provisioning new machine with config: &{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:02:30.738590    3758 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I0816 10:02:30.760743    3758 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 10:02:30.760905    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:30.760935    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:30.771028    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51041
	I0816 10:02:30.771383    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:30.771745    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:30.771760    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:30.771967    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:30.772077    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:02:30.772157    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:30.772261    3758 start.go:159] libmachine.API.Create for "ha-286000" (driver="hyperkit")
	I0816 10:02:30.772276    3758 client.go:168] LocalClient.Create starting
	I0816 10:02:30.772303    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem
	I0816 10:02:30.772358    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:02:30.772368    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:02:30.772413    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem
	I0816 10:02:30.772450    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:02:30.772460    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:02:30.772472    3758 main.go:141] libmachine: Running pre-create checks...
	I0816 10:02:30.772477    3758 main.go:141] libmachine: (ha-286000-m02) Calling .PreCreateCheck
	I0816 10:02:30.772545    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:30.772568    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetConfigRaw
	I0816 10:02:30.781772    3758 main.go:141] libmachine: Creating machine...
	I0816 10:02:30.781789    3758 main.go:141] libmachine: (ha-286000-m02) Calling .Create
	I0816 10:02:30.781943    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:30.782181    3758 main.go:141] libmachine: (ha-286000-m02) DBG | I0816 10:02:30.781934    3805 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:02:30.782269    3758 main.go:141] libmachine: (ha-286000-m02) Downloading /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1276/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 10:02:30.982082    3758 main.go:141] libmachine: (ha-286000-m02) DBG | I0816 10:02:30.982020    3805 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa...
	I0816 10:02:31.266609    3758 main.go:141] libmachine: (ha-286000-m02) DBG | I0816 10:02:31.266514    3805 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/ha-286000-m02.rawdisk...
	I0816 10:02:31.266629    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Writing magic tar header
	I0816 10:02:31.266638    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Writing SSH key tar header
	I0816 10:02:31.267382    3758 main.go:141] libmachine: (ha-286000-m02) DBG | I0816 10:02:31.267242    3805 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02 ...
	I0816 10:02:31.642864    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:31.642889    3758 main.go:141] libmachine: (ha-286000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/hyperkit.pid
	I0816 10:02:31.642912    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Using UUID f7630301-2bc3-4935-a751-ac72999da031
	I0816 10:02:31.669349    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Generated MAC 72:69:8f:11:68:1d
	I0816 10:02:31.669369    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000
	I0816 10:02:31.669397    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f7630301-2bc3-4935-a751-ac72999da031", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:02:31.669426    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f7630301-2bc3-4935-a751-ac72999da031", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:02:31.669455    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "f7630301-2bc3-4935-a751-ac72999da031", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/ha-286000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"}
	I0816 10:02:31.669484    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U f7630301-2bc3-4935-a751-ac72999da031 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/ha-286000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"
	I0816 10:02:31.669499    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 10:02:31.672492    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: Pid is 3806
	I0816 10:02:31.672957    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 0
	I0816 10:02:31.673000    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:31.673054    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:31.674014    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:31.674086    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:02:31.674098    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:02:31.674120    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:02:31.674129    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:02:31.674137    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:02:31.680025    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 10:02:31.688109    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 10:02:31.688968    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:02:31.688993    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:02:31.689006    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:02:31.689016    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:02:32.077133    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 10:02:32.077153    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 10:02:32.191789    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:02:32.191806    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:02:32.191814    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:02:32.191825    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:02:32.192676    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 10:02:32.192686    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 10:02:33.675165    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 1
	I0816 10:02:33.675183    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:33.675297    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:33.676089    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:33.676141    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:02:33.676153    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:02:33.676162    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:02:33.676169    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:02:33.676193    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:02:35.676266    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 2
	I0816 10:02:35.676285    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:35.676360    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:35.677182    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:35.677237    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:02:35.677248    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:02:35.677262    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:02:35.677270    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:02:35.677281    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:02:37.678515    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 3
	I0816 10:02:37.678532    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:37.678572    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:37.679405    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:37.679441    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:02:37.679451    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:02:37.679461    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:02:37.679468    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:02:37.679483    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:02:37.782496    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:37 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 10:02:37.782531    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:37 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 10:02:37.782540    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:37 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 10:02:37.806630    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:37 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 10:02:39.680161    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 4
	I0816 10:02:39.680178    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:39.680273    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:39.681064    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:39.681112    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:02:39.681136    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:02:39.681155    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:02:39.681170    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:02:39.681181    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:02:41.681380    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 5
	I0816 10:02:41.681414    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:41.681563    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:41.682343    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:41.682392    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:02:41.682406    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:02:41.682415    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found match: 72:69:8f:11:68:1d
	I0816 10:02:41.682421    3758 main.go:141] libmachine: (ha-286000-m02) DBG | IP: 192.169.0.6
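
The "Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases" loop above polls macOS's DHCP lease file every couple of seconds until the new VM's MAC appears (attempt 5 here). A sketch of one poll, assuming the lease file's key=value block layout (ip_address=..., hw_address=1,<mac>); the helper name findIP is ours, not the driver's:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findIP scans the lease file for a block whose hw_address carries our
// MAC and returns that block's IP address.
func findIP(mac string) (string, error) {
	f, err := os.Open("/var/db/dhcpd_leases")
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if v, ok := strings.CutPrefix(line, "ip_address="); ok {
			ip = v // remember the IP of the entry we are inside
		}
		if v, ok := strings.CutPrefix(line, "hw_address="); ok {
			// value is "<type>,<mac>"; note macOS drops leading zeros
			// per octet (de:40:9:4d:... above), so real code normalizes
			if _, m, found := strings.Cut(v, ","); found && strings.EqualFold(m, mac) {
				return ip, nil
			}
		}
	}
	return "", fmt.Errorf("no lease for %s yet, retry", mac)
}

func main() {
	ip, err := findIP("72:69:8f:11:68:1d")
	fmt.Println(ip, err)
}
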
	I0816 10:02:41.682475    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetConfigRaw
	I0816 10:02:41.683135    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:41.683257    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:41.683358    3758 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 10:02:41.683367    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetState
	I0816 10:02:41.683478    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:41.683537    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:41.684414    3758 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 10:02:41.684423    3758 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 10:02:41.684427    3758 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 10:02:41.684431    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:41.684566    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:41.684692    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:41.684809    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:41.684943    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:41.685095    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:41.685300    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:41.685308    3758 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 10:02:42.746559    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
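
The WaitForSSH step above boils down to dialing port 22 and running `exit 0` until it succeeds. A library-style sketch using golang.org/x/crypto/ssh; the retry count and timeouts are arbitrary, and key loading is left to the caller:

package provision

import (
	"fmt"
	"time"

	"golang.org/x/crypto/ssh"
)

// waitForSSH retries "exit 0" until the guest's sshd answers.
func waitForSSH(addr, user string, signer ssh.Signer) error {
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway local VM
		Timeout:         5 * time.Second,
	}
	for attempt := 0; attempt < 60; attempt++ {
		if client, err := ssh.Dial("tcp", addr, cfg); err == nil {
			sess, err := client.NewSession()
			if err == nil {
				runErr := sess.Run("exit 0") // shell reachable and sane
				sess.Close()
				client.Close()
				if runErr == nil {
					return nil
				}
			} else {
				client.Close()
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("sshd on %s never became ready", addr)
}
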
	I0816 10:02:42.746571    3758 main.go:141] libmachine: Detecting the provisioner...
	I0816 10:02:42.746577    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:42.746714    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:42.746824    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.746914    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.746988    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:42.747112    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:42.747269    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:42.747277    3758 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 10:02:42.811630    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 10:02:42.811666    3758 main.go:141] libmachine: found compatible host: buildroot
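
Provisioner detection is just `cat /etc/os-release` parsed into key=value pairs, with NAME=Buildroot selecting the buildroot provisioner. A small parser sketch:

package main

import (
	"bufio"
	"fmt"
	"io"
	"os"
	"strings"
)

// parseOSRelease turns the key=value lines shown above into a map.
func parseOSRelease(r io.Reader) map[string]string {
	out := map[string]string{}
	sc := bufio.NewScanner(r)
	for sc.Scan() {
		k, v, ok := strings.Cut(sc.Text(), "=")
		if !ok {
			continue // blank or malformed line
		}
		out[k] = strings.Trim(v, `"`) // PRETTY_NAME is quoted, NAME is not
	}
	return out
}

func main() {
	f, err := os.Open("/etc/os-release")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	fmt.Println(parseOSRelease(f)["NAME"]) // "Buildroot" on the guest above
}
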
	I0816 10:02:42.811672    3758 main.go:141] libmachine: Provisioning with buildroot...
	I0816 10:02:42.811681    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:02:42.811816    3758 buildroot.go:166] provisioning hostname "ha-286000-m02"
	I0816 10:02:42.811828    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:02:42.811943    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:42.812045    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:42.812136    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.812274    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.812376    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:42.812513    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:42.812673    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:42.812682    3758 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-286000-m02 && echo "ha-286000-m02" | sudo tee /etc/hostname
	I0816 10:02:42.887531    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-286000-m02
	
	I0816 10:02:42.887545    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:42.887676    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:42.887769    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.887867    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.887953    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:42.888075    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:42.888214    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:42.888225    3758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-286000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-286000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-286000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 10:02:42.959625    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 10:02:42.959641    3758 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19461-1276/.minikube CaCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19461-1276/.minikube}
	I0816 10:02:42.959649    3758 buildroot.go:174] setting up certificates
	I0816 10:02:42.959661    3758 provision.go:84] configureAuth start
	I0816 10:02:42.959666    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:02:42.959803    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:02:42.959899    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:42.959980    3758 provision.go:143] copyHostCerts
	I0816 10:02:42.960007    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:02:42.960055    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem, removing ...
	I0816 10:02:42.960061    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:02:42.960954    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem (1078 bytes)
	I0816 10:02:42.961160    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:02:42.961190    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem, removing ...
	I0816 10:02:42.961195    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:02:42.961273    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem (1123 bytes)
	I0816 10:02:42.961439    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:02:42.961509    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem, removing ...
	I0816 10:02:42.961515    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:02:42.961594    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem (1679 bytes)
	I0816 10:02:42.961746    3758 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem org=jenkins.ha-286000-m02 san=[127.0.0.1 192.169.0.6 ha-286000-m02 localhost minikube]
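
The server cert generated above has to carry every name and address the Docker endpoint answers on, which is what the san=[...] list is. A sketch of that signing step with crypto/x509; CA loading, PEM encoding, and serial-number handling are elided, and the SAN values are copied from the log line:

package provision

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// serverCert signs a Docker server certificate with the cluster CA.
func serverCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-286000-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-286000-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.6")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	return der, key, err
}
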
	I0816 10:02:43.093511    3758 provision.go:177] copyRemoteCerts
	I0816 10:02:43.093556    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 10:02:43.093572    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:43.093719    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:43.093818    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.093917    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:43.094005    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:02:43.132229    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 10:02:43.132299    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 10:02:43.151814    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 10:02:43.151873    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 10:02:43.171464    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 10:02:43.171532    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 10:02:43.191045    3758 provision.go:87] duration metric: took 231.381757ms to configureAuth
	I0816 10:02:43.191057    3758 buildroot.go:189] setting minikube options for container-runtime
	I0816 10:02:43.191187    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:02:43.191200    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:43.191321    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:43.191410    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:43.191497    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.191580    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.191658    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:43.191759    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:43.191891    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:43.191898    3758 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 10:02:43.256733    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 10:02:43.256744    3758 buildroot.go:70] root file system type: tmpfs
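
The `df --output=fstype /` probe above reports tmpfs because the Buildroot guest's root filesystem lives in RAM. Inside a Linux guest the same answer comes from statfs(2) without shelling out; a Linux-only sketch using golang.org/x/sys/unix (this would run in the VM, not on the macOS host):

package provision

import "golang.org/x/sys/unix"

// rootIsTmpfs answers the same question as `df --output=fstype /`.
func rootIsTmpfs() (bool, error) {
	var st unix.Statfs_t
	if err := unix.Statfs("/", &st); err != nil {
		return false, err
	}
	return st.Type == unix.TMPFS_MAGIC, nil
}
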
	I0816 10:02:43.256823    3758 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 10:02:43.256835    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:43.256961    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:43.257048    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.257142    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.257220    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:43.257364    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:43.257505    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:43.257553    3758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 10:02:43.330709    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 10:02:43.330729    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:43.330874    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:43.330977    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.331073    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.331167    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:43.331293    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:43.331440    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:43.331451    3758 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 10:02:44.884634    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
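The diff-or-install one-liner above is what makes provisioning idempotent: Docker is only replaced and restarted when the rendered unit differs from the installed one (here the diff fails simply because no unit exists yet, hence the "Created symlink" from the first enable). The same compare-and-swap as a Go sketch, meant to run inside the guest as root; the function name is ours:

package provision

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installUnit swaps in newPath over livePath and bounces the service only
// when the contents actually differ. unitName would be "docker" above.
func installUnit(newPath, livePath, unitName string) error {
	want, err := os.ReadFile(newPath)
	if err != nil {
		return err
	}
	have, _ := os.ReadFile(livePath) // a missing live unit means "differs"
	if bytes.Equal(want, have) {
		return os.Remove(newPath) // identical: nothing to restart
	}
	if err := os.Rename(newPath, livePath); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", unitName},
		{"systemctl", "restart", unitName},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return nil
}
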
	I0816 10:02:44.884648    3758 main.go:141] libmachine: Checking connection to Docker...
	I0816 10:02:44.884655    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetURL
	I0816 10:02:44.884793    3758 main.go:141] libmachine: Docker is up and running!
	I0816 10:02:44.884799    3758 main.go:141] libmachine: Reticulating splines...
	I0816 10:02:44.884804    3758 client.go:171] duration metric: took 14.112778669s to LocalClient.Create
	I0816 10:02:44.884820    3758 start.go:167] duration metric: took 14.112810851s to libmachine.API.Create "ha-286000"
	I0816 10:02:44.884826    3758 start.go:293] postStartSetup for "ha-286000-m02" (driver="hyperkit")
	I0816 10:02:44.884832    3758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 10:02:44.884843    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:44.884991    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 10:02:44.885004    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:44.885101    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:44.885204    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:44.885335    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:44.885428    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:02:44.925701    3758 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 10:02:44.929599    3758 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 10:02:44.929613    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/addons for local assets ...
	I0816 10:02:44.929722    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/files for local assets ...
	I0816 10:02:44.929908    3758 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> 18312.pem in /etc/ssl/certs
	I0816 10:02:44.929914    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /etc/ssl/certs/18312.pem
	I0816 10:02:44.930119    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 10:02:44.940484    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:02:44.973200    3758 start.go:296] duration metric: took 88.36731ms for postStartSetup
	I0816 10:02:44.973230    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetConfigRaw
	I0816 10:02:44.973848    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:02:44.973996    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:02:44.974351    3758 start.go:128] duration metric: took 14.23599979s to createHost
	I0816 10:02:44.974366    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:44.974448    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:44.974568    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:44.974660    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:44.974744    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:44.974844    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:44.974966    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:44.974973    3758 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 10:02:45.036267    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723827764.932365297
	
	I0816 10:02:45.036285    3758 fix.go:216] guest clock: 1723827764.932365297
	I0816 10:02:45.036291    3758 fix.go:229] Guest: 2024-08-16 10:02:44.932365297 -0700 PDT Remote: 2024-08-16 10:02:44.97436 -0700 PDT m=+56.899083401 (delta=-41.994703ms)
	I0816 10:02:45.036301    3758 fix.go:200] guest clock delta is within tolerance: -41.994703ms
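
The guest-clock check above compares the guest's `date +%s.%N` output against the host clock and accepts small skew (-41.99ms here). A sketch of the delta computation, keeping the log's guest-minus-host sign convention; note the float round-trip costs a few hundred nanoseconds at 2024 epoch values, negligible against a millisecond-scale tolerance:

package provision

import (
	"strconv"
	"strings"
	"time"
)

// clockDelta parses `date +%s.%N` output from the guest and returns the
// guest-minus-host skew.
func clockDelta(guestOut string) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(time.Now()), nil // negative when the guest runs behind
}
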
	I0816 10:02:45.036306    3758 start.go:83] releasing machines lock for "ha-286000-m02", held for 14.298063452s
	I0816 10:02:45.036338    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:45.036468    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:02:45.062334    3758 out.go:177] * Found network options:
	I0816 10:02:45.105996    3758 out.go:177]   - NO_PROXY=192.169.0.5
	W0816 10:02:45.128174    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:02:45.128216    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:45.128829    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:45.128969    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:45.129060    3758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 10:02:45.129088    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	W0816 10:02:45.129115    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:02:45.129222    3758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0816 10:02:45.129232    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:45.129238    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:45.129370    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:45.129393    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:45.129500    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:45.129515    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:45.129631    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:02:45.129647    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:45.129727    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	W0816 10:02:45.164265    3758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 10:02:45.164324    3758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 10:02:45.211468    3758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 10:02:45.211489    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:02:45.211595    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:02:45.228144    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 10:02:45.237310    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 10:02:45.246272    3758 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 10:02:45.246316    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 10:02:45.255987    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:02:45.265400    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 10:02:45.274661    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:02:45.283628    3758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 10:02:45.292648    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 10:02:45.302207    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 10:02:45.311314    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 10:02:45.320414    3758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 10:02:45.328532    3758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 10:02:45.337229    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:45.444954    3758 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0816 10:02:45.464569    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:02:45.464652    3758 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 10:02:45.488292    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:02:45.500162    3758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 10:02:45.561835    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:02:45.572027    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:02:45.582122    3758 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 10:02:45.603011    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:02:45.613488    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:02:45.628434    3758 ssh_runner.go:195] Run: which cri-dockerd
	I0816 10:02:45.631358    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 10:02:45.638496    3758 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 10:02:45.652676    3758 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 10:02:45.755971    3758 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 10:02:45.860533    3758 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 10:02:45.860559    3758 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 10:02:45.874570    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:45.976095    3758 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:02:48.459358    3758 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.483288325s)
	I0816 10:02:48.459417    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0816 10:02:48.469882    3758 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0816 10:02:48.484429    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:02:48.495038    3758 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0816 10:02:48.597129    3758 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0816 10:02:48.691412    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:48.797592    3758 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0816 10:02:48.812075    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:02:48.823307    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:48.918652    3758 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0816 10:02:48.980191    3758 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0816 10:02:48.980257    3758 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0816 10:02:48.984954    3758 start.go:563] Will wait 60s for crictl version
	I0816 10:02:48.985011    3758 ssh_runner.go:195] Run: which crictl
	I0816 10:02:48.988185    3758 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 10:02:49.015262    3758 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
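
Both runtime probes above lean on the CLI's own formatting: docker's --format flag takes a Go template evaluated against the version document, so the server version comes back as a bare string. The same probe as a standalone sketch:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Equivalent to the `docker version --format {{.Server.Version}}` Run lines.
	out, err := exec.Command("docker", "version", "--format", "{{.Server.Version}}").Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("docker server:", strings.TrimSpace(string(out))) // e.g. 27.1.2
}
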
	I0816 10:02:49.015344    3758 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:02:49.032341    3758 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:02:49.078128    3758 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0816 10:02:49.137645    3758 out.go:177]   - env NO_PROXY=192.169.0.5
	I0816 10:02:49.176661    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:02:49.177129    3758 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0816 10:02:49.181778    3758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:02:49.191522    3758 mustload.go:65] Loading cluster: ha-286000
	I0816 10:02:49.191665    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:02:49.191885    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:49.191909    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:49.200721    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51064
	I0816 10:02:49.201069    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:49.201413    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:49.201424    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:49.201648    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:49.201776    3758 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:02:49.201862    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:49.201960    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:02:49.202920    3758 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:02:49.203188    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:49.203205    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:49.211926    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51066
	I0816 10:02:49.212258    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:49.212635    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:49.212652    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:49.212860    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:49.212967    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:49.213056    3758 certs.go:68] Setting up /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000 for IP: 192.169.0.6
	I0816 10:02:49.213061    3758 certs.go:194] generating shared ca certs ...
	I0816 10:02:49.213071    3758 certs.go:226] acquiring lock for ca certs: {Name:mka549fbc467849834d2267da204028c49d88a3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:49.213229    3758 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key
	I0816 10:02:49.213305    3758 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key
	I0816 10:02:49.213314    3758 certs.go:256] generating profile certs ...
	I0816 10:02:49.213406    3758 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key
	I0816 10:02:49.213429    3758 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.5eb44438
	I0816 10:02:49.213443    3758 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.5eb44438 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I0816 10:02:49.263058    3758 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.5eb44438 ...
	I0816 10:02:49.263077    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.5eb44438: {Name:mk266d5e842df442c104f85113e06d171d5a89ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:49.263395    3758 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.5eb44438 ...
	I0816 10:02:49.263404    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.5eb44438: {Name:mk1428912a4c1edaafe55f9bff302c19ad337785 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:49.263623    3758 certs.go:381] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.5eb44438 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt
	I0816 10:02:49.263846    3758 certs.go:385] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.5eb44438 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key
	I0816 10:02:49.264093    3758 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key
	I0816 10:02:49.264103    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 10:02:49.264125    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 10:02:49.264143    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 10:02:49.264163    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 10:02:49.264180    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 10:02:49.264199    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 10:02:49.264223    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 10:02:49.264242    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 10:02:49.264331    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem (1338 bytes)
	W0816 10:02:49.264378    3758 certs.go:480] ignoring /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831_empty.pem, impossibly tiny 0 bytes
	I0816 10:02:49.264386    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 10:02:49.264418    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem (1078 bytes)
	I0816 10:02:49.264449    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem (1123 bytes)
	I0816 10:02:49.264477    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem (1679 bytes)
	I0816 10:02:49.264545    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:02:49.264593    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /usr/share/ca-certificates/18312.pem
	I0816 10:02:49.264614    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:49.264634    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem -> /usr/share/ca-certificates/1831.pem
	I0816 10:02:49.264663    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:49.264814    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:49.264897    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:49.265014    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:49.265108    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:49.296192    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0816 10:02:49.299577    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0816 10:02:49.312765    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0816 10:02:49.316551    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0816 10:02:49.332167    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0816 10:02:49.336199    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0816 10:02:49.349166    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0816 10:02:49.354242    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0816 10:02:49.364119    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0816 10:02:49.367349    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0816 10:02:49.375423    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0816 10:02:49.378717    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0816 10:02:49.386918    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 10:02:49.408036    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 10:02:49.428163    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 10:02:49.448952    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 10:02:49.468986    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0816 10:02:49.489674    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 10:02:49.509597    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 10:02:49.530175    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 10:02:49.550578    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /usr/share/ca-certificates/18312.pem (1708 bytes)
	I0816 10:02:49.571266    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 10:02:49.590995    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem --> /usr/share/ca-certificates/1831.pem (1338 bytes)
	I0816 10:02:49.611766    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0816 10:02:49.625222    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0816 10:02:49.638852    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0816 10:02:49.653497    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0816 10:02:49.667098    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0816 10:02:49.680935    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0816 10:02:49.695437    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0816 10:02:49.708684    3758 ssh_runner.go:195] Run: openssl version
	I0816 10:02:49.712979    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18312.pem && ln -fs /usr/share/ca-certificates/18312.pem /etc/ssl/certs/18312.pem"
	I0816 10:02:49.721565    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18312.pem
	I0816 10:02:49.725513    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 16:57 /usr/share/ca-certificates/18312.pem
	I0816 10:02:49.725580    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18312.pem
	I0816 10:02:49.730364    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18312.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 10:02:49.739184    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 10:02:49.747494    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:49.750923    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:48 /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:49.750955    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:49.755195    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 10:02:49.763703    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1831.pem && ln -fs /usr/share/ca-certificates/1831.pem /etc/ssl/certs/1831.pem"
	I0816 10:02:49.772914    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1831.pem
	I0816 10:02:49.776377    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 16:57 /usr/share/ca-certificates/1831.pem
	I0816 10:02:49.776421    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1831.pem
	I0816 10:02:49.780731    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1831.pem /etc/ssl/certs/51391683.0"
	I0816 10:02:49.789078    3758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 10:02:49.792242    3758 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 10:02:49.792277    3758 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.0 docker true true} ...
	I0816 10:02:49.792332    3758 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-286000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 10:02:49.792349    3758 kube-vip.go:115] generating kube-vip config ...
	I0816 10:02:49.792381    3758 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 10:02:49.804469    3758 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 10:02:49.804511    3758 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0816 10:02:49.804567    3758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 10:02:49.812921    3758 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0816 10:02:49.812983    3758 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0816 10:02:49.821031    3758 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0816 10:02:49.821035    3758 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0816 10:02:49.821039    3758 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubectl
	I0816 10:02:51.911279    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0816 10:02:51.911374    3758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0816 10:02:51.915115    3758 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0816 10:02:51.915150    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0816 10:02:52.537289    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0816 10:02:52.537383    3758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0816 10:02:52.540898    3758 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0816 10:02:52.540929    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0816 10:02:53.267467    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:02:53.278589    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0816 10:02:53.278731    3758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0816 10:02:53.282055    3758 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0816 10:02:53.282073    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0816 10:02:53.498714    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0816 10:02:53.506112    3758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0816 10:02:53.519672    3758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 10:02:53.533551    3758 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0816 10:02:53.548024    3758 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0816 10:02:53.551124    3758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:02:53.560470    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:53.659270    3758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:02:53.674625    3758 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:02:53.674900    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:53.674923    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:53.683891    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51093
	I0816 10:02:53.684256    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:53.684641    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:53.684658    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:53.684885    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:53.685003    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:53.685084    3758 start.go:317] joinCluster: &{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:02:53.685155    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0816 10:02:53.685167    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:53.685248    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:53.685339    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:53.685423    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:53.685503    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:53.765911    3758 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:02:53.765972    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mpmdnf.6rzc0wpb01e0t6lz --discovery-token-ca-cert-hash sha256:995590413ea712c4f7db2cf849d28264ebc291e6cd53d741ce1d22c1daaa1602 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-286000-m02 --control-plane --apiserver-advertise-address=192.169.0.6 --apiserver-bind-port=8443"
	I0816 10:03:22.070391    3758 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mpmdnf.6rzc0wpb01e0t6lz --discovery-token-ca-cert-hash sha256:995590413ea712c4f7db2cf849d28264ebc291e6cd53d741ce1d22c1daaa1602 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-286000-m02 --control-plane --apiserver-advertise-address=192.169.0.6 --apiserver-bind-port=8443": (28.304928658s)
	I0816 10:03:22.070414    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0816 10:03:22.427732    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-286000-m02 minikube.k8s.io/updated_at=2024_08_16T10_03_22_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd minikube.k8s.io/name=ha-286000 minikube.k8s.io/primary=false
	I0816 10:03:22.531211    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-286000-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0816 10:03:22.629050    3758 start.go:319] duration metric: took 28.944509117s to joinCluster
	I0816 10:03:22.629105    3758 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:03:22.629360    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:03:22.652598    3758 out.go:177] * Verifying Kubernetes components...
	I0816 10:03:22.726472    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:03:22.952731    3758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:03:22.975401    3758 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:03:22.975621    3758 kapi.go:59] client config for ha-286000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key", CAFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xa202f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0816 10:03:22.975663    3758 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0816 10:03:22.975836    3758 node_ready.go:35] waiting up to 6m0s for node "ha-286000-m02" to be "Ready" ...
	I0816 10:03:22.975886    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:22.975891    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:22.975897    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:22.975901    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:22.983180    3758 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0816 10:03:23.476314    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:23.476365    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:23.476380    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:23.476394    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:23.502874    3758 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0816 10:03:23.977290    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:23.977304    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:23.977311    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:23.977315    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:23.979495    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:24.477835    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:24.477851    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:24.477858    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:24.477861    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:24.480701    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:24.977514    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:24.977568    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:24.977580    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:24.977588    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:24.980856    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:24.981299    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:25.476235    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:25.476252    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:25.476260    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:25.476277    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:25.478567    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:25.976418    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:25.976469    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:25.976477    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:25.976480    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:25.978730    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:26.476423    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:26.476439    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:26.476445    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:26.476448    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:26.478424    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:26.975919    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:26.975934    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:26.975940    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:26.975945    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:26.977809    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:27.475966    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:27.475981    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:27.475987    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:27.475990    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:27.477769    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:27.478121    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:27.977123    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:27.977139    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:27.977146    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:27.977150    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:27.979266    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:28.477140    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:28.477226    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:28.477254    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:28.477264    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:28.480386    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:28.977368    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:28.977407    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:28.977420    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:28.977427    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:28.980479    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:29.477521    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:29.477545    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:29.477556    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:29.477565    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:29.481179    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:29.481510    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:29.975906    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:29.975932    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:29.975943    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:29.975949    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:29.978633    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:30.477171    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:30.477189    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:30.477198    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:30.477201    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:30.480005    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:30.977947    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:30.977973    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:30.977993    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:30.977998    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:30.982219    3758 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 10:03:31.476252    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:31.476327    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:31.476341    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:31.476347    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:31.479417    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:31.975987    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:31.976012    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:31.976023    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:31.976029    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:31.979409    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:31.979880    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:32.477180    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:32.477207    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:32.477229    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:32.477240    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:32.480686    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:32.976225    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:32.976250    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:32.976261    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:32.976269    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:32.979533    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:33.475956    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:33.475980    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:33.475991    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:33.475997    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:33.479025    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:33.977111    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:33.977185    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:33.977198    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:33.977204    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:33.980426    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:33.980922    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:34.476455    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:34.476470    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:34.476477    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:34.476481    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:34.478517    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:34.976996    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:34.977037    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:34.977049    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:34.977084    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:34.980500    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:35.476045    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:35.476068    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:35.476079    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:35.476086    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:35.479058    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:35.975920    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:35.975944    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:35.975956    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:35.975962    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:35.979071    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:36.477845    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:36.477872    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:36.477885    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:36.477946    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:36.481401    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:36.481890    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:36.977033    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:36.977057    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:36.977069    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:36.977076    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:36.980476    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:37.477095    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:37.477120    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:37.477133    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:37.477141    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:37.480485    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:37.976303    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:37.976320    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:37.976343    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:37.976348    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:37.978551    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:38.476492    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:38.476517    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.476528    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.476536    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.480190    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:38.976091    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:38.976114    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.976124    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.976129    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.979741    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:38.980051    3758 node_ready.go:49] node "ha-286000-m02" has status "Ready":"True"
	I0816 10:03:38.980066    3758 node_ready.go:38] duration metric: took 16.004516347s for node "ha-286000-m02" to be "Ready" ...
	I0816 10:03:38.980077    3758 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 10:03:38.980127    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:03:38.980135    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.980142    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.980152    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.982937    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:38.987546    3758 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-2kqjf" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.987591    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-2kqjf
	I0816 10:03:38.987597    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.987603    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.987607    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.989339    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.989748    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:38.989758    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.989764    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.989768    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.991324    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.991660    3758 pod_ready.go:93] pod "coredns-6f6b679f8f-2kqjf" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:38.991671    3758 pod_ready.go:82] duration metric: took 4.111381ms for pod "coredns-6f6b679f8f-2kqjf" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.991678    3758 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-rfbz7" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.991708    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-rfbz7
	I0816 10:03:38.991713    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.991718    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.991721    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.993245    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.993624    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:38.993631    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.993640    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.993644    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.995095    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.995419    3758 pod_ready.go:93] pod "coredns-6f6b679f8f-rfbz7" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:38.995427    3758 pod_ready.go:82] duration metric: took 3.744646ms for pod "coredns-6f6b679f8f-rfbz7" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.995433    3758 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.995467    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-286000
	I0816 10:03:38.995472    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.995478    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.995482    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.997007    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.997390    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:38.997397    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.997403    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.997406    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.998839    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.999151    3758 pod_ready.go:93] pod "etcd-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:38.999160    3758 pod_ready.go:82] duration metric: took 3.721879ms for pod "etcd-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.999165    3758 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.999202    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-286000-m02
	I0816 10:03:38.999207    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.999213    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.999217    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.001189    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:39.001547    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:39.001554    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.001559    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.001563    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.003004    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:39.003290    3758 pod_ready.go:93] pod "etcd-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:39.003298    3758 pod_ready.go:82] duration metric: took 4.127582ms for pod "etcd-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.003307    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.177834    3758 request.go:632] Waited for 174.443582ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000
	I0816 10:03:39.177948    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000
	I0816 10:03:39.177963    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.177974    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.177981    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.181371    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:39.376438    3758 request.go:632] Waited for 194.538822ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:39.376562    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:39.376574    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.376586    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.376596    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.379989    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:39.380425    3758 pod_ready.go:93] pod "kube-apiserver-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:39.380437    3758 pod_ready.go:82] duration metric: took 377.131323ms for pod "kube-apiserver-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.380446    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.576633    3758 request.go:632] Waited for 196.095353ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000-m02
	I0816 10:03:39.576687    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000-m02
	I0816 10:03:39.576696    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.576769    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.576793    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.580164    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:39.776642    3758 request.go:632] Waited for 195.66937ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:39.776725    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:39.776735    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.776746    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.776752    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.780273    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:39.780714    3758 pod_ready.go:93] pod "kube-apiserver-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:39.780723    3758 pod_ready.go:82] duration metric: took 400.274102ms for pod "kube-apiserver-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.780730    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.977598    3758 request.go:632] Waited for 196.714211ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000
	I0816 10:03:39.977650    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000
	I0816 10:03:39.977659    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.977670    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.977679    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.981079    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.177460    3758 request.go:632] Waited for 195.94045ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:40.177528    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:40.177589    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:40.177603    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:40.177609    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:40.180971    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.181992    3758 pod_ready.go:93] pod "kube-controller-manager-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:40.182036    3758 pod_ready.go:82] duration metric: took 401.277819ms for pod "kube-controller-manager-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:40.182047    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:40.376258    3758 request.go:632] Waited for 194.111468ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000-m02
	I0816 10:03:40.376327    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000-m02
	I0816 10:03:40.376336    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:40.376347    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:40.376355    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:40.379476    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.576506    3758 request.go:632] Waited for 196.409077ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:40.576552    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:40.576560    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:40.576571    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:40.576580    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:40.579964    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.580641    3758 pod_ready.go:93] pod "kube-controller-manager-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:40.580654    3758 pod_ready.go:82] duration metric: took 398.607786ms for pod "kube-controller-manager-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:40.580663    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pt669" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:40.778194    3758 request.go:632] Waited for 197.469682ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pt669
	I0816 10:03:40.778324    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pt669
	I0816 10:03:40.778333    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:40.778345    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:40.778352    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:40.781925    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.976623    3758 request.go:632] Waited for 194.04689ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:40.976752    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:40.976761    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:40.976772    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:40.976780    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:40.980022    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.980473    3758 pod_ready.go:93] pod "kube-proxy-pt669" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:40.980485    3758 pod_ready.go:82] duration metric: took 399.822578ms for pod "kube-proxy-pt669" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:40.980494    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-w4nt2" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:41.177361    3758 request.go:632] Waited for 196.811255ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w4nt2
	I0816 10:03:41.177533    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w4nt2
	I0816 10:03:41.177545    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:41.177556    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:41.177563    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:41.181543    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:41.377692    3758 request.go:632] Waited for 195.583238ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:41.377791    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:41.377802    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:41.377812    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:41.377819    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:41.381134    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:41.381716    3758 pod_ready.go:93] pod "kube-proxy-w4nt2" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:41.381726    3758 pod_ready.go:82] duration metric: took 401.234529ms for pod "kube-proxy-w4nt2" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:41.381733    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:41.577020    3758 request.go:632] Waited for 195.246357ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000
	I0816 10:03:41.577073    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000
	I0816 10:03:41.577125    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:41.577138    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:41.577147    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:41.580051    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:41.777879    3758 request.go:632] Waited for 197.366145ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:41.777974    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:41.777983    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:41.777995    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:41.778003    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:41.781434    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:41.782012    3758 pod_ready.go:93] pod "kube-scheduler-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:41.782023    3758 pod_ready.go:82] duration metric: took 400.292624ms for pod "kube-scheduler-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:41.782051    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:41.976356    3758 request.go:632] Waited for 194.23341ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000-m02
	I0816 10:03:41.976463    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000-m02
	I0816 10:03:41.976473    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:41.976485    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:41.976494    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:41.980088    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:42.176952    3758 request.go:632] Waited for 196.161737ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:42.177009    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:42.177018    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.177026    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.177083    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.182888    3758 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 10:03:42.183179    3758 pod_ready.go:93] pod "kube-scheduler-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:42.183188    3758 pod_ready.go:82] duration metric: took 401.133714ms for pod "kube-scheduler-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:42.183203    3758 pod_ready.go:39] duration metric: took 3.203173842s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
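
The repeated "Waited for ...ms due to client-side throttling" entries above come from client-go's token-bucket rate limiter on the client, not from API priority and fairness on the server. A minimal sketch of how those waits arise, assuming a reachable kubeconfig; the path and the QPS/Burst values here are illustrative, not minikube's:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical kubeconfig path; minikube points this at its profile.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        cfg.QPS = 5    // low bucket refill rate makes the waits easy to see
        cfg.Burst = 10 // bucket capacity; extra requests queue client-side
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Issuing more GETs than Burst allows forces the limiter to sleep,
        // which client-go logs exactly like the ~200ms waits above.
        for i := 0; i < 50; i++ {
            if _, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{}); err != nil {
                fmt.Println("list error:", err)
            }
        }
    }
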
	I0816 10:03:42.183221    3758 api_server.go:52] waiting for apiserver process to appear ...
	I0816 10:03:42.183274    3758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:03:42.195671    3758 api_server.go:72] duration metric: took 19.566910589s to wait for apiserver process to appear ...
	I0816 10:03:42.195682    3758 api_server.go:88] waiting for apiserver healthz status ...
	I0816 10:03:42.195698    3758 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0816 10:03:42.198837    3758 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0816 10:03:42.198870    3758 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0816 10:03:42.198874    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.198880    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.198884    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.199389    3758 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0816 10:03:42.199468    3758 api_server.go:141] control plane version: v1.31.0
	I0816 10:03:42.199478    3758 api_server.go:131] duration metric: took 3.791893ms to wait for apiserver health ...
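
Before polling workloads, the apiserver is gated on /healthz returning 200 "ok", after which /version is read for the control-plane version (v1.31.0 above). A bare-bones probe of the same two endpoints; TLS verification is skipped here purely for brevity, whereas minikube itself trusts the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        client := &http.Client{Transport: &http.Transport{
            // Brevity only: real clients should verify against the cluster CA.
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        for _, path := range []string{"/healthz", "/version"} {
            resp, err := client.Get("https://192.169.0.5:8443" + path)
            if err != nil {
                panic(err)
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            fmt.Printf("GET %s -> %d: %s\n", path, resp.StatusCode, body)
        }
    }
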
	I0816 10:03:42.199486    3758 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 10:03:42.378048    3758 request.go:632] Waited for 178.500938ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:03:42.378156    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:03:42.378166    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.378178    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.378186    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.382776    3758 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 10:03:42.386068    3758 system_pods.go:59] 17 kube-system pods found
	I0816 10:03:42.386083    3758 system_pods.go:61] "coredns-6f6b679f8f-2kqjf" [be18cd09-3e3b-4749-bf29-7001a879f593] Running
	I0816 10:03:42.386087    3758 system_pods.go:61] "coredns-6f6b679f8f-rfbz7" [a593e27a-e38b-46c7-a603-44963c31c095] Running
	I0816 10:03:42.386090    3758 system_pods.go:61] "etcd-ha-286000" [c45bc623-befe-4e88-8da1-bb4f7d7be5b2] Running
	I0816 10:03:42.386093    3758 system_pods.go:61] "etcd-ha-286000-m02" [e14fbe0c-4527-4cbb-8c13-01c0b32a8d15] Running
	I0816 10:03:42.386096    3758 system_pods.go:61] "kindnet-t9kjf" [20c57f28-5e86-44a4-8a2a-07e1e240abf2] Running
	I0816 10:03:42.386099    3758 system_pods.go:61] "kindnet-whqxb" [1cc5291b-52ea-44b5-b87c-607d46b5281a] Running
	I0816 10:03:42.386102    3758 system_pods.go:61] "kube-apiserver-ha-286000" [c0d298a9-931b-4084-9e5f-01a88de27456] Running
	I0816 10:03:42.386104    3758 system_pods.go:61] "kube-apiserver-ha-286000-m02" [3666c131-2b88-483a-ab8c-b18cb8db7d6f] Running
	I0816 10:03:42.386120    3758 system_pods.go:61] "kube-controller-manager-ha-286000" [5a4f4c5d-d89a-4e5f-b022-479ca31f64cd] Running
	I0816 10:03:42.386127    3758 system_pods.go:61] "kube-controller-manager-ha-286000-m02" [a347c3d4-fa47-49ea-93a0-83087017a2bd] Running
	I0816 10:03:42.386130    3758 system_pods.go:61] "kube-proxy-pt669" [798c956f-8f08-465a-b0d9-90b65f703f80] Running
	I0816 10:03:42.386139    3758 system_pods.go:61] "kube-proxy-w4nt2" [79ce2248-f8fd-4a3b-aeb7-f81d7ad64564] Running
	I0816 10:03:42.386143    3758 system_pods.go:61] "kube-scheduler-ha-286000" [b4af80e0-cbb0-4257-a212-3585faff0875] Running
	I0816 10:03:42.386145    3758 system_pods.go:61] "kube-scheduler-ha-286000-m02" [89920e93-c959-47ec-8a2c-f2b63e8c3d3a] Running
	I0816 10:03:42.386153    3758 system_pods.go:61] "kube-vip-ha-286000" [b0f85be2-2afb-44b0-a701-161b24529349] Running
	I0816 10:03:42.386156    3758 system_pods.go:61] "kube-vip-ha-286000-m02" [4982dc53-946d-4f75-8e9e-452ef39ff2fa] Running
	I0816 10:03:42.386159    3758 system_pods.go:61] "storage-provisioner" [4805d53b-2db3-4092-a3f2-d4a854e93adc] Running
	I0816 10:03:42.386163    3758 system_pods.go:74] duration metric: took 186.675963ms to wait for pod list to return data ...
	I0816 10:03:42.386170    3758 default_sa.go:34] waiting for default service account to be created ...
	I0816 10:03:42.577030    3758 request.go:632] Waited for 190.795237ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0816 10:03:42.577183    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0816 10:03:42.577198    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.577209    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.577215    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.581020    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:42.581204    3758 default_sa.go:45] found service account: "default"
	I0816 10:03:42.581216    3758 default_sa.go:55] duration metric: took 195.045213ms for default service account to be created ...
	I0816 10:03:42.581224    3758 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 10:03:42.777160    3758 request.go:632] Waited for 195.848894ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:03:42.777252    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:03:42.777268    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.777285    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.777303    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.781637    3758 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 10:03:42.784962    3758 system_pods.go:86] 17 kube-system pods found
	I0816 10:03:42.784974    3758 system_pods.go:89] "coredns-6f6b679f8f-2kqjf" [be18cd09-3e3b-4749-bf29-7001a879f593] Running
	I0816 10:03:42.784978    3758 system_pods.go:89] "coredns-6f6b679f8f-rfbz7" [a593e27a-e38b-46c7-a603-44963c31c095] Running
	I0816 10:03:42.784981    3758 system_pods.go:89] "etcd-ha-286000" [c45bc623-befe-4e88-8da1-bb4f7d7be5b2] Running
	I0816 10:03:42.784984    3758 system_pods.go:89] "etcd-ha-286000-m02" [e14fbe0c-4527-4cbb-8c13-01c0b32a8d15] Running
	I0816 10:03:42.784987    3758 system_pods.go:89] "kindnet-t9kjf" [20c57f28-5e86-44a4-8a2a-07e1e240abf2] Running
	I0816 10:03:42.784990    3758 system_pods.go:89] "kindnet-whqxb" [1cc5291b-52ea-44b5-b87c-607d46b5281a] Running
	I0816 10:03:42.784992    3758 system_pods.go:89] "kube-apiserver-ha-286000" [c0d298a9-931b-4084-9e5f-01a88de27456] Running
	I0816 10:03:42.784995    3758 system_pods.go:89] "kube-apiserver-ha-286000-m02" [3666c131-2b88-483a-ab8c-b18cb8db7d6f] Running
	I0816 10:03:42.784997    3758 system_pods.go:89] "kube-controller-manager-ha-286000" [5a4f4c5d-d89a-4e5f-b022-479ca31f64cd] Running
	I0816 10:03:42.785001    3758 system_pods.go:89] "kube-controller-manager-ha-286000-m02" [a347c3d4-fa47-49ea-93a0-83087017a2bd] Running
	I0816 10:03:42.785003    3758 system_pods.go:89] "kube-proxy-pt669" [798c956f-8f08-465a-b0d9-90b65f703f80] Running
	I0816 10:03:42.785006    3758 system_pods.go:89] "kube-proxy-w4nt2" [79ce2248-f8fd-4a3b-aeb7-f81d7ad64564] Running
	I0816 10:03:42.785009    3758 system_pods.go:89] "kube-scheduler-ha-286000" [b4af80e0-cbb0-4257-a212-3585faff0875] Running
	I0816 10:03:42.785011    3758 system_pods.go:89] "kube-scheduler-ha-286000-m02" [89920e93-c959-47ec-8a2c-f2b63e8c3d3a] Running
	I0816 10:03:42.785015    3758 system_pods.go:89] "kube-vip-ha-286000" [b0f85be2-2afb-44b0-a701-161b24529349] Running
	I0816 10:03:42.785017    3758 system_pods.go:89] "kube-vip-ha-286000-m02" [4982dc53-946d-4f75-8e9e-452ef39ff2fa] Running
	I0816 10:03:42.785023    3758 system_pods.go:89] "storage-provisioner" [4805d53b-2db3-4092-a3f2-d4a854e93adc] Running
	I0816 10:03:42.785028    3758 system_pods.go:126] duration metric: took 203.80313ms to wait for k8s-apps to be running ...
	I0816 10:03:42.785034    3758 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 10:03:42.785088    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:03:42.796381    3758 system_svc.go:56] duration metric: took 11.343439ms WaitForService to wait for kubelet
	I0816 10:03:42.796397    3758 kubeadm.go:582] duration metric: took 20.16764951s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
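
The kubelet gate above relies only on the exit status of systemctl is-active --quiet; no output is parsed. A local sketch of the same check (the log runs it on the guest over SSH via ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Exit status is the whole signal: 0 means the unit is active.
        if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
            fmt.Println("kubelet not active:", err)
            return
        }
        fmt.Println("kubelet active")
    }
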
	I0816 10:03:42.796409    3758 node_conditions.go:102] verifying NodePressure condition ...
	I0816 10:03:42.977594    3758 request.go:632] Waited for 181.143005ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0816 10:03:42.977695    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0816 10:03:42.977706    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.977719    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.977724    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.981373    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:42.981988    3758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 10:03:42.982013    3758 node_conditions.go:123] node cpu capacity is 2
	I0816 10:03:42.982039    3758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 10:03:42.982043    3758 node_conditions.go:123] node cpu capacity is 2
	I0816 10:03:42.982046    3758 node_conditions.go:105] duration metric: took 185.637747ms to run NodePressure ...
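
The NodePressure step reads each node's capacity from the API, which is where the ephemeral-storage and cpu figures above come from. A sketch that prints the same two fields, assuming a clientset built as in the throttling sketch earlier:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // Matches the node_conditions.go lines: ephemeral storage and cpu.
            fmt.Printf("%s: ephemeral=%s cpu=%s\n",
                n.Name,
                n.Status.Capacity.StorageEphemeral().String(),
                n.Status.Capacity.Cpu().String())
        }
    }
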
	I0816 10:03:42.982055    3758 start.go:241] waiting for startup goroutines ...
	I0816 10:03:42.982079    3758 start.go:255] writing updated cluster config ...
	I0816 10:03:43.003740    3758 out.go:201] 
	I0816 10:03:43.024885    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:03:43.024976    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:03:43.047776    3758 out.go:177] * Starting "ha-286000-m03" control-plane node in "ha-286000" cluster
	I0816 10:03:43.137621    3758 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:03:43.137655    3758 cache.go:56] Caching tarball of preloaded images
	I0816 10:03:43.137861    3758 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 10:03:43.137884    3758 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:03:43.138022    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:03:43.159475    3758 start.go:360] acquireMachinesLock for ha-286000-m03: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:03:43.159592    3758 start.go:364] duration metric: took 91.442µs to acquireMachinesLock for "ha-286000-m03"
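
acquireMachinesLock serializes VM creation across concurrent minikube operations, with the parameters shown in the log (Delay:500ms, Timeout:13m0s). minikube uses a named-mutex library for this; a simplified file-based equivalent with the same retry/timeout shape, offered only as an illustration:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func acquireLock(path string, delay, timeout time.Duration) (func(), error) {
        deadline := time.Now().Add(timeout)
        for {
            // O_EXCL makes creation atomic: only one process wins the lock.
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o644)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out acquiring %s", path)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        release, err := acquireLock("/tmp/ha-286000-m03.lock", 500*time.Millisecond, 13*time.Minute)
        if err != nil {
            panic(err)
        }
        defer release()
        fmt.Println("lock held; provisioning m03 ...")
    }
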
	I0816 10:03:43.159625    3758 start.go:93] Provisioning new machine with config: &{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:03:43.159736    3758 start.go:125] createHost starting for "m03" (driver="hyperkit")
	I0816 10:03:43.180569    3758 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 10:03:43.180719    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:03:43.180762    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:03:43.191087    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51098
	I0816 10:03:43.191558    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:03:43.191889    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:03:43.191899    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:03:43.192106    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:03:43.192302    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:03:43.192394    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:43.192506    3758 start.go:159] libmachine.API.Create for "ha-286000" (driver="hyperkit")
	I0816 10:03:43.192526    3758 client.go:168] LocalClient.Create starting
	I0816 10:03:43.192559    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem
	I0816 10:03:43.192606    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:03:43.192620    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:03:43.192675    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem
	I0816 10:03:43.192704    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:03:43.192714    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:03:43.192726    3758 main.go:141] libmachine: Running pre-create checks...
	I0816 10:03:43.192731    3758 main.go:141] libmachine: (ha-286000-m03) Calling .PreCreateCheck
	I0816 10:03:43.192813    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:43.192833    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetConfigRaw
	I0816 10:03:43.202038    3758 main.go:141] libmachine: Creating machine...
	I0816 10:03:43.202062    3758 main.go:141] libmachine: (ha-286000-m03) Calling .Create
	I0816 10:03:43.202302    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:43.202574    3758 main.go:141] libmachine: (ha-286000-m03) DBG | I0816 10:03:43.202284    3848 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:03:43.202705    3758 main.go:141] libmachine: (ha-286000-m03) Downloading /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1276/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 10:03:43.529965    3758 main.go:141] libmachine: (ha-286000-m03) DBG | I0816 10:03:43.529902    3848 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa...
	I0816 10:03:43.631057    3758 main.go:141] libmachine: (ha-286000-m03) DBG | I0816 10:03:43.630965    3848 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/ha-286000-m03.rawdisk...
	I0816 10:03:43.631074    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Writing magic tar header
	I0816 10:03:43.631083    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Writing SSH key tar header
	I0816 10:03:43.631711    3758 main.go:141] libmachine: (ha-286000-m03) DBG | I0816 10:03:43.631681    3848 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03 ...
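
The "Writing magic tar header" / "Writing SSH key tar header" lines reflect how the generated SSH key reaches the guest: it is packed as a small tar archive at the front of the raw disk image, which the guest extracts on first boot before the rest of the file is grown sparsely to the requested size. A sketch under those assumptions; the paths and the in-archive layout are illustrative:

    package main

    import (
        "archive/tar"
        "os"
    )

    func main() {
        disk, err := os.Create("/tmp/ha-286000-m03.rawdisk")
        if err != nil {
            panic(err)
        }
        defer disk.Close()

        key, err := os.ReadFile("/tmp/id_rsa.pub") // hypothetical key path
        if err != nil {
            panic(err)
        }
        // Tar archive at the head of the disk carrying the SSH key.
        tw := tar.NewWriter(disk)
        if err := tw.WriteHeader(&tar.Header{Name: ".ssh/authorized_keys", Mode: 0o600, Size: int64(len(key))}); err != nil {
            panic(err)
        }
        if _, err := tw.Write(key); err != nil {
            panic(err)
        }
        if err := tw.Close(); err != nil {
            panic(err)
        }
        // Grow the remainder to the requested disk size (20000MB in the log)
        // as a sparse region; existing bytes at the head are preserved.
        if err := disk.Truncate(20000 * 1024 * 1024); err != nil {
            panic(err)
        }
    }
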
	I0816 10:03:44.155270    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:44.155286    3758 main.go:141] libmachine: (ha-286000-m03) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/hyperkit.pid
	I0816 10:03:44.155318    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Using UUID 408efb18-ff91-428f-b414-6eb0d982a779
	I0816 10:03:44.181981    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Generated MAC 8a:e:de:5b:b5:8b
	I0816 10:03:44.182010    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000
	I0816 10:03:44.182059    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"408efb18-ff91-428f-b414-6eb0d982a779", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:03:44.182101    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"408efb18-ff91-428f-b414-6eb0d982a779", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:03:44.182228    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "408efb18-ff91-428f-b414-6eb0d982a779", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/ha-286000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"}
	I0816 10:03:44.182306    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 408efb18-ff91-428f-b414-6eb0d982a779 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/ha-286000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"
	I0816 10:03:44.182332    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 10:03:44.185479    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: Pid is 3849
	I0816 10:03:44.185900    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 0
	I0816 10:03:44.185912    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:44.186025    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:44.186994    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:44.187062    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:03:44.187079    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:03:44.187114    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:03:44.187142    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:03:44.187168    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:03:44.187185    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:03:44.193352    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 10:03:44.201781    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 10:03:44.202595    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:03:44.202610    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:03:44.202637    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:03:44.202653    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:03:44.588441    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 10:03:44.588458    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 10:03:44.703100    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:03:44.703117    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:03:44.703125    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:03:44.703135    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:03:44.703952    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 10:03:44.703963    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 10:03:46.188345    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 1
	I0816 10:03:46.188360    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:46.188480    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:46.189255    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:46.189306    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:03:46.189321    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:03:46.189335    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:03:46.189352    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:03:46.189359    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:03:46.189385    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:03:48.190823    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 2
	I0816 10:03:48.190838    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:48.190916    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:48.191692    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:48.191747    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:03:48.191757    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:03:48.191779    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:03:48.191787    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:03:48.191803    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:03:48.191815    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:03:50.191897    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 3
	I0816 10:03:50.191913    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:50.191977    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:50.192744    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:50.192793    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:03:50.192802    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:03:50.192811    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:03:50.192820    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:03:50.192836    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:03:50.192850    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:03:50.310705    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:50 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 10:03:50.310770    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:50 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 10:03:50.310783    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:50 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 10:03:50.334054    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:50 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 10:03:52.193443    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 4
	I0816 10:03:52.193459    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:52.193535    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:52.194319    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:52.194367    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:03:52.194379    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:03:52.194390    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:03:52.194399    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:03:52.194408    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:03:52.194419    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:03:54.195434    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 5
	I0816 10:03:54.195456    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:54.195540    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:54.196406    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:54.196510    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 6 entries in /var/db/dhcpd_leases!
	I0816 10:03:54.196530    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66c0d7f9}
	I0816 10:03:54.196551    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found match: 8a:e:de:5b:b5:8b
	I0816 10:03:54.196553    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetConfigRaw
	I0816 10:03:54.196565    3758 main.go:141] libmachine: (ha-286000-m03) DBG | IP: 192.169.0.7
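
The IP-discovery loop above polls macOS's /var/db/dhcpd_leases every 2s for the MAC hyperkit generated (8a:e:de:5b:b5:8b) until a lease appears. A sketch of that loop; the stanza field names (ip_address=, hw_address=1,<mac>) follow that file's format, and the regex-based parsing here is deliberately simplified:

    package main

    import (
        "fmt"
        "os"
        "regexp"
        "time"
    )

    func ipForMAC(mac string) (string, bool) {
        data, err := os.ReadFile("/var/db/dhcpd_leases")
        if err != nil {
            return "", false
        }
        // Pair ip_address with the hw_address that follows it in a stanza.
        re := regexp.MustCompile(`ip_address=(\S+)\s+hw_address=1,` + regexp.QuoteMeta(mac))
        if m := re.FindStringSubmatch(string(data)); m != nil {
            return m[1], true
        }
        return "", false
    }

    func main() {
        mac := "8a:e:de:5b:b5:8b" // MAC generated for ha-286000-m03 in the log
        for attempt := 0; ; attempt++ {
            if ip, ok := ipForMAC(mac); ok {
                fmt.Printf("attempt %d: found %s -> %s\n", attempt, mac, ip)
                return
            }
            time.Sleep(2 * time.Second) // matches the 2s cadence in the log
        }
    }
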
	I0816 10:03:54.197236    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:54.197351    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:54.197441    3758 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 10:03:54.197454    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetState
	I0816 10:03:54.197541    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:54.197611    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:54.198439    3758 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 10:03:54.198446    3758 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 10:03:54.198450    3758 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 10:03:54.198454    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:54.198532    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:54.198612    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:54.198695    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:54.198783    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:54.198892    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:54.199063    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:54.199071    3758 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 10:03:55.266165    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
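
WaitForSSH considers the machine reachable once running "exit 0" over SSH returns without error; the command's only job is to prove sshd is up and the generated key works. A sketch of the probe using x/crypto/ssh; host-key checking is disabled, as is typical when provisioning a freshly created VM, and the key path is illustrative:

    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/path/to/machines/ha-286000-m03/id_rsa") // hypothetical path
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fresh VM, no known host key yet
            Timeout:         5 * time.Second,
        }
        for {
            client, err := ssh.Dial("tcp", "192.169.0.7:22", cfg)
            if err == nil {
                sess, err := client.NewSession()
                if err == nil && sess.Run("exit 0") == nil {
                    sess.Close()
                    client.Close()
                    fmt.Println("SSH is available")
                    return
                }
                client.Close()
            }
            time.Sleep(2 * time.Second)
        }
    }
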
	I0816 10:03:55.266179    3758 main.go:141] libmachine: Detecting the provisioner...
	I0816 10:03:55.266185    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.266318    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.266413    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.266507    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.266601    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.266731    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.266879    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.266886    3758 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 10:03:55.332778    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 10:03:55.332822    3758 main.go:141] libmachine: found compatible host: buildroot
	I0816 10:03:55.332828    3758 main.go:141] libmachine: Provisioning with buildroot...
	I0816 10:03:55.332834    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:03:55.332971    3758 buildroot.go:166] provisioning hostname "ha-286000-m03"
	I0816 10:03:55.332982    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:03:55.333079    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.333186    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.333277    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.333375    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.333463    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.333593    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.333737    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.333746    3758 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-286000-m03 && echo "ha-286000-m03" | sudo tee /etc/hostname
	I0816 10:03:55.411701    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-286000-m03
	
	I0816 10:03:55.411715    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.411851    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.411965    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.412057    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.412140    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.412274    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.412418    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.412429    3758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-286000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-286000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-286000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 10:03:55.485102    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 10:03:55.485118    3758 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19461-1276/.minikube CaCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19461-1276/.minikube}
	I0816 10:03:55.485127    3758 buildroot.go:174] setting up certificates
	I0816 10:03:55.485134    3758 provision.go:84] configureAuth start
	I0816 10:03:55.485141    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:03:55.485284    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:03:55.485375    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.485445    3758 provision.go:143] copyHostCerts
	I0816 10:03:55.485474    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:03:55.485527    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem, removing ...
	I0816 10:03:55.485533    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:03:55.485654    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem (1123 bytes)
	I0816 10:03:55.485864    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:03:55.485895    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem, removing ...
	I0816 10:03:55.485900    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:03:55.486003    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem (1679 bytes)
	I0816 10:03:55.486172    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:03:55.486206    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem, removing ...
	I0816 10:03:55.486215    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:03:55.486286    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem (1078 bytes)
	I0816 10:03:55.486435    3758 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem org=jenkins.ha-286000-m03 san=[127.0.0.1 192.169.0.7 ha-286000-m03 localhost minikube]
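
The server certificate above is signed for the SAN set [127.0.0.1 192.169.0.7 ha-286000-m03 localhost minikube] so the Docker daemon can be verified whether it is reached by IP or by name. A sketch of an x509 template with those SANs; it self-signs instead of loading ca.pem/ca-key.pem, which is a stand-in for the real CA-signing step:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Stand-in for loading the minikube CA key from disk.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-286000-m03"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-286000-m03", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.7")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &caKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("server cert DER: %d bytes\n", len(der))
    }
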
	I0816 10:03:55.572803    3758 provision.go:177] copyRemoteCerts
	I0816 10:03:55.572855    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 10:03:55.572869    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.573015    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.573114    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.573208    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.573304    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:03:55.612104    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 10:03:55.612186    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 10:03:55.632484    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 10:03:55.632548    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 10:03:55.652677    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 10:03:55.652752    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 10:03:55.672555    3758 provision.go:87] duration metric: took 187.4165ms to configureAuth
	I0816 10:03:55.672568    3758 buildroot.go:189] setting minikube options for container-runtime
	I0816 10:03:55.672735    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:03:55.672748    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:55.672889    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.672982    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.673071    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.673157    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.673245    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.673371    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.673496    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.673504    3758 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 10:03:55.738053    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 10:03:55.738064    3758 buildroot.go:70] root file system type: tmpfs
	I0816 10:03:55.738156    3758 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 10:03:55.738167    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.738306    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.738397    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.738489    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.738573    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.738694    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.738841    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.738890    3758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 10:03:55.813774    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 10:03:55.813790    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.813924    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.814019    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.814103    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.814193    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.814320    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.814470    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.814484    3758 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 10:03:57.356529    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0816 10:03:57.356544    3758 main.go:141] libmachine: Checking connection to Docker...
	I0816 10:03:57.356549    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetURL
	I0816 10:03:57.356692    3758 main.go:141] libmachine: Docker is up and running!
	I0816 10:03:57.356700    3758 main.go:141] libmachine: Reticulating splines...
	I0816 10:03:57.356705    3758 client.go:171] duration metric: took 14.164443579s to LocalClient.Create
	I0816 10:03:57.356717    3758 start.go:167] duration metric: took 14.164482312s to libmachine.API.Create "ha-286000"
	I0816 10:03:57.356727    3758 start.go:293] postStartSetup for "ha-286000-m03" (driver="hyperkit")
	I0816 10:03:57.356734    3758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 10:03:57.356744    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:57.356919    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 10:03:57.356935    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:57.357030    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:57.357130    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:57.357273    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:57.357390    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:03:57.398231    3758 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 10:03:57.402472    3758 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 10:03:57.402483    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/addons for local assets ...
	I0816 10:03:57.402571    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/files for local assets ...
	I0816 10:03:57.402723    3758 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> 18312.pem in /etc/ssl/certs
	I0816 10:03:57.402729    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /etc/ssl/certs/18312.pem
	I0816 10:03:57.402895    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 10:03:57.412226    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:03:57.443242    3758 start.go:296] duration metric: took 86.507779ms for postStartSetup
	I0816 10:03:57.443268    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetConfigRaw
	I0816 10:03:57.443871    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:03:57.444031    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:03:57.444386    3758 start.go:128] duration metric: took 14.284913269s to createHost
	I0816 10:03:57.444401    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:57.444490    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:57.444568    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:57.444658    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:57.444732    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:57.444842    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:57.444963    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:57.444971    3758 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 10:03:57.509634    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723827837.586061636
	
	I0816 10:03:57.509648    3758 fix.go:216] guest clock: 1723827837.586061636
	I0816 10:03:57.509653    3758 fix.go:229] Guest: 2024-08-16 10:03:57.586061636 -0700 PDT Remote: 2024-08-16 10:03:57.444395 -0700 PDT m=+129.370490857 (delta=141.666636ms)
	I0816 10:03:57.509665    3758 fix.go:200] guest clock delta is within tolerance: 141.666636ms
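	
	The fix.go lines above run date +%s.%N on the guest, compare the result against the host clock at the moment the command returns, and log whether the delta falls within a fixed tolerance. A rough standalone equivalent (the guest address is hypothetical; %N is unsupported by BSD date, hence python3 on the macOS host):
	
	  guest=$(ssh docker@192.169.0.7 'date +%s.%N')
	  host=$(python3 -c 'import time; print(time.time())')
	  echo "guest-host delta: $(echo "$guest - $host" | bc)s"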
	I0816 10:03:57.509669    3758 start.go:83] releasing machines lock for "ha-286000-m03", held for 14.350339851s
	I0816 10:03:57.509691    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:57.509832    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:03:57.542277    3758 out.go:177] * Found network options:
	I0816 10:03:57.562266    3758 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0816 10:03:57.583040    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	W0816 10:03:57.583073    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:03:57.583092    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:57.583964    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:57.584310    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:57.584469    3758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 10:03:57.584522    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	W0816 10:03:57.584570    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	W0816 10:03:57.584596    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:03:57.584708    3758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0816 10:03:57.584735    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:57.584813    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:57.584995    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:57.585023    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:57.585164    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:57.585195    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:57.585338    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:03:57.585406    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:57.585566    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	W0816 10:03:57.623481    3758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 10:03:57.623541    3758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 10:03:57.670278    3758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
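	
	The find invocation above is logged after shell parsing, so its parentheses and globs appear unquoted; typed into a shell directly it needs the usual escaping. An equivalent, quoting-safe sketch:
	
	  sudo find /etc/cni/net.d -maxdepth 1 -type f \
	    \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	    -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;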
	I0816 10:03:57.670298    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:03:57.670408    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:03:57.686134    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 10:03:57.694681    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 10:03:57.703134    3758 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 10:03:57.703198    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 10:03:57.711736    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:03:57.720041    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 10:03:57.728421    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:03:57.737252    3758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 10:03:57.746356    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 10:03:57.755117    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 10:03:57.763962    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 10:03:57.772639    3758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 10:03:57.780684    3758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 10:03:57.788938    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:03:57.893932    3758 ssh_runner.go:195] Run: sudo systemctl restart containerd
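	
	The sed pipeline above rewrites /etc/containerd/config.toml in place before containerd is restarted. A minimal sketch of the fragment those edits converge on (section names follow the containerd 1.7 CRI plugin layout; surrounding defaults come from the guest image and are not shown in the log):
	
	  [plugins."io.containerd.grpc.v1.cri"]
	    enable_unprivileged_ports = true
	    restrict_oom_score_adj = false
	    sandbox_image = "registry.k8s.io/pause:3.10"
	    [plugins."io.containerd.grpc.v1.cri".cni]
	      conf_dir = "/etc/cni/net.d"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false   # the "cgroupfs" cgroup driver chosen above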
	I0816 10:03:57.913150    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:03:57.913224    3758 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 10:03:57.932274    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:03:57.945000    3758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 10:03:57.965222    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:03:57.977460    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:03:57.988259    3758 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 10:03:58.006552    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:03:58.017261    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:03:58.032545    3758 ssh_runner.go:195] Run: which cri-dockerd
	I0816 10:03:58.035464    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 10:03:58.042742    3758 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 10:03:58.056747    3758 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 10:03:58.163471    3758 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 10:03:58.281020    3758 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 10:03:58.281044    3758 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
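	
	The 130-byte payload scp'd above is not shown in the log. A plausible reconstruction of a cgroupfs daemon.json, using standard dockerd options (the exact contents are an assumption, not the logged bytes):
	
	  {
	    "exec-opts": ["native.cgroupdriver=cgroupfs"],
	    "log-driver": "json-file",
	    "log-opts": { "max-size": "100m" },
	    "storage-driver": "overlay2"
	  }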
	I0816 10:03:58.294992    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:03:58.399122    3758 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:04:59.415916    3758 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.017935847s)
	I0816 10:04:59.415981    3758 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0816 10:04:59.453653    3758 out.go:201] 
	W0816 10:04:59.474040    3758 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 16 17:03:56 ha-286000-m03 systemd[1]: Starting Docker Application Container Engine...
	Aug 16 17:03:56 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:56.200976866Z" level=info msg="Starting up"
	Aug 16 17:03:56 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:56.201455258Z" level=info msg="containerd not running, starting managed containerd"
	Aug 16 17:03:56 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:56.201908035Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=519
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.218394236Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233514110Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233584256Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233654513Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233690141Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233768300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233806284Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233955562Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233996222Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.234026670Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.234054929Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.234137090Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.234382642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.235934793Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.235985918Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.236117022Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.236159609Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.236256962Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.236327154Z" level=info msg="metadata content store policy set" policy=shared
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238422470Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238509436Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238555940Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238594343Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238633954Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238734892Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238944917Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239083528Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239123172Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239153894Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239184820Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239214617Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239248728Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239285992Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239333696Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239368624Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239411950Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239451554Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239490333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239523121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239554681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239589296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239621390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239653930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239683266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239712579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239742056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239772828Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239801901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239830636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239859570Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239893189Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239929555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239960582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239991787Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240076099Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240121809Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240153088Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240182438Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240210918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240240067Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240268586Z" level=info msg="NRI interface is disabled by configuration."
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240520719Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240609661Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240710485Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240795617Z" level=info msg="containerd successfully booted in 0.023088s"
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.221812848Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.228854621Z" level=info msg="Loading containers: start."
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.312023793Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.390459780Z" level=info msg="Loading containers: done."
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.404746652Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.404864935Z" level=info msg="Daemon has completed initialization"
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.430646618Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.430810646Z" level=info msg="API listen on [::]:2376"
	Aug 16 17:03:57 ha-286000-m03 systemd[1]: Started Docker Application Container Engine.
	Aug 16 17:03:58 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:58.488220464Z" level=info msg="Processing signal 'terminated'"
	Aug 16 17:03:58 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:58.489144852Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 16 17:03:58 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:58.489473398Z" level=info msg="Daemon shutdown complete"
	Aug 16 17:03:58 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:58.489515121Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 16 17:03:58 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:58.489527809Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 16 17:03:58 ha-286000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Aug 16 17:03:59 ha-286000-m03 systemd[1]: docker.service: Deactivated successfully.
	Aug 16 17:03:59 ha-286000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Aug 16 17:03:59 ha-286000-m03 systemd[1]: Starting Docker Application Container Engine...
	Aug 16 17:03:59 ha-286000-m03 dockerd[913]: time="2024-08-16T17:03:59.520198872Z" level=info msg="Starting up"
	Aug 16 17:04:59 ha-286000-m03 dockerd[913]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 16 17:04:59 ha-286000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 16 17:04:59 ha-286000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 16 17:04:59 ha-286000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0816 10:04:59.474111    3758 out.go:270] * 
	W0816 10:04:59.474938    3758 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:04:59.558192    3758 out.go:201] 
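	
	The terminal failure above is dockerd's second start timing out after sixty seconds while dialing /run/containerd/containerd.sock. The system containerd service was stopped a few lines earlier (sudo systemctl stop -f containerd), which is consistent with nothing listening on that socket, though the log does not show why this start dials the system socket rather than a docker-managed one. A triage sketch using standard tooling on the guest:
	
	  sudo systemctl status containerd --no-pager   # was containerd left stopped?
	  sudo ss -xl | grep containerd                 # is anything listening on the socket?
	  sudo systemctl start containerd && sudo systemctl restart docker
	  sudo journalctl --no-pager -u containerd | tail -n 50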
	
	
	==> Docker <==
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.631803753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.636334140Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.636542015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.636768594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.637206626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:02:50 ha-286000 cri-dockerd[1134]: time="2024-08-16T17:02:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fbd84fb813c9034ce56be933a9dc0c8539c5c831abbd163996da762065f0c208/resolv.conf as [nameserver 192.169.0.1]"
	Aug 16 17:02:50 ha-286000 cri-dockerd[1134]: time="2024-08-16T17:02:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/452942e267927b3a15327ece33bffe6fb305db22e6a72ff9b65d4acfe89f3891/resolv.conf as [nameserver 192.169.0.1]"
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.842515621Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.842901741Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.843146100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.843415719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.885181891Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.885227629Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.885240492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.885438543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:05:03 ha-286000 dockerd[1241]: time="2024-08-16T17:05:03.188642191Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:05:03 ha-286000 dockerd[1241]: time="2024-08-16T17:05:03.188710762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:05:03 ha-286000 dockerd[1241]: time="2024-08-16T17:05:03.188723920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:05:03 ha-286000 dockerd[1241]: time="2024-08-16T17:05:03.188799320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:05:03 ha-286000 cri-dockerd[1134]: time="2024-08-16T17:05:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1873ade92edb9d51940849fdee8cb6db41b03368956580ec6099a918aff580e1/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 16 17:05:04 ha-286000 cri-dockerd[1134]: time="2024-08-16T17:05:04Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Aug 16 17:05:04 ha-286000 dockerd[1241]: time="2024-08-16T17:05:04.522783748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:05:04 ha-286000 dockerd[1241]: time="2024-08-16T17:05:04.522878436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:05:04 ha-286000 dockerd[1241]: time="2024-08-16T17:05:04.522904596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:05:04 ha-286000 dockerd[1241]: time="2024-08-16T17:05:04.523003751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5bbe7a8cc492e       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   11 minutes ago      Running             busybox                   0                   1873ade92edb9       busybox-7dff88458-dvmvk
	bcd7170b050a5       cbb01a7bd410d                                                                                         14 minutes ago      Running             coredns                   0                   452942e267927       coredns-6f6b679f8f-rfbz7
	60d3d03e297ce       cbb01a7bd410d                                                                                         14 minutes ago      Running             coredns                   0                   fbd84fb813c90       coredns-6f6b679f8f-2kqjf
	f55b59f53c6eb       6e38f40d628db                                                                                         14 minutes ago      Running             storage-provisioner       0                   482990a4b00e6       storage-provisioner
	2677394dcd66f       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              14 minutes ago      Running             kindnet-cni               0                   afaf657e85c32       kindnet-whqxb
	81f6c96d46494       ad83b2ca7b09e                                                                                         14 minutes ago      Running             kube-proxy                0                   dfb822fc27b5e       kube-proxy-w4nt2
	cafa34c562392       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     14 minutes ago      Running             kube-vip                  0                   69fba128b04a6       kube-vip-ha-286000
	8f5867ee99d9b       045733566833c                                                                                         14 minutes ago      Running             kube-controller-manager   0                   e87fd3e77384f       kube-controller-manager-ha-286000
	f7b2e9efdd94f       1766f54c897f0                                                                                         14 minutes ago      Running             kube-scheduler            0                   8e2b186980aff       kube-scheduler-ha-286000
	d8dadff6cec78       2e96e5913fc06                                                                                         14 minutes ago      Running             etcd                      0                   cdb14ff7d8896       etcd-ha-286000
	5f3adc202a7a2       604f5db92eaa8                                                                                         14 minutes ago      Running             kube-apiserver            0                   818ee6dafe6c9       kube-apiserver-ha-286000
	
	
	==> coredns [60d3d03e297c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:37081 - 62351 "HINFO IN 7437422972060865489.3041931607585121070. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009759141s
	[INFO] 10.244.0.4:34542 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.00094119s
	[INFO] 10.244.0.4:38912 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.000921826s
	[INFO] 10.244.1.2:39585 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000070042s
	[INFO] 10.244.0.4:39673 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000101817s
	[INFO] 10.244.0.4:55820 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000225412s
	[INFO] 10.244.1.2:48427 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135793s
	[INFO] 10.244.1.2:33204 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000791531s
	[INFO] 10.244.1.2:51238 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081236s
	[INFO] 10.244.1.2:42705 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000135743s
	[INFO] 10.244.1.2:33254 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106295s
	[INFO] 10.244.0.4:53900 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067988s
	[INFO] 10.244.1.2:48994 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00006771s
	[INFO] 10.244.1.2:56734 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000138883s
	
	
	==> coredns [bcd7170b050a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:47813 - 35687 "HINFO IN 8431179596625010309.1478595720938230641. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009077429s
	[INFO] 10.244.0.4:47786 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132847s
	[INFO] 10.244.0.4:40096 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.001354873s
	[INFO] 10.244.1.2:40884 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096214s
	[INFO] 10.244.1.2:55655 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000050276s
	[INFO] 10.244.1.2:47690 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000614458s
	[INFO] 10.244.0.4:45344 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000076791s
	[INFO] 10.244.0.4:53101 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.001042655s
	[INFO] 10.244.0.4:43889 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000082483s
	[INFO] 10.244.0.4:59210 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.00074308s
	[INFO] 10.244.0.4:38429 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00010233s
	[INFO] 10.244.0.4:48679 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00007214s
	[INFO] 10.244.1.2:43879 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000062278s
	[INFO] 10.244.1.2:45902 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000100209s
	[INFO] 10.244.1.2:39740 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071593s
	[INFO] 10.244.0.4:50878 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000077864s
	[INFO] 10.244.0.4:51260 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078903s
	[INFO] 10.244.0.4:38206 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000049117s
	[INFO] 10.244.1.2:54952 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000071797s
	[INFO] 10.244.1.2:36478 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063757s
	
	
	==> describe nodes <==
	Name:               ha-286000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-286000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=ha-286000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T10_02_26_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:02:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-286000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:16:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 17:15:42 +0000   Fri, 16 Aug 2024 17:02:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 17:15:42 +0000   Fri, 16 Aug 2024 17:02:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 17:15:42 +0000   Fri, 16 Aug 2024 17:02:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 17:15:42 +0000   Fri, 16 Aug 2024 17:02:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-286000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 a56c711da9894c3c86f2a7af9fec2b53
	  System UUID:                ad96408c-0000-0000-89eb-d74e5b68d297
	  Boot ID:                    23e4d4eb-a603-45bf-aca4-fb0893407f5a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-dvmvk              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-6f6b679f8f-2kqjf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-6f6b679f8f-rfbz7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-286000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-whqxb                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-286000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-286000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-w4nt2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-286000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-286000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node ha-286000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node ha-286000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node ha-286000 status is now: NodeHasSufficientPID
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node ha-286000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node ha-286000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node ha-286000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                node-controller  Node ha-286000 event: Registered Node ha-286000 in Controller
	  Normal  NodeReady                14m                kubelet          Node ha-286000 status is now: NodeReady
	  Normal  RegisteredNode           13m                node-controller  Node ha-286000 event: Registered Node ha-286000 in Controller
	
	
	Name:               ha-286000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-286000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=ha-286000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T10_03_22_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:03:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-286000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:16:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 17:15:35 +0000   Fri, 16 Aug 2024 17:03:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 17:15:35 +0000   Fri, 16 Aug 2024 17:03:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 17:15:35 +0000   Fri, 16 Aug 2024 17:03:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 17:15:35 +0000   Fri, 16 Aug 2024 17:03:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-286000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 42bf4a1b451f44ad925f50a6a94e4cff
	  System UUID:                f7634935-0000-0000-a751-ac72999da031
	  Boot ID:                    e82d142e-37b9-4938-81bc-f5bc1a2db23f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-k9m92                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-286000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-t9kjf                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-286000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-286000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-pt669                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-286000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-286000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node ha-286000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node ha-286000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node ha-286000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node ha-286000-m02 event: Registered Node ha-286000-m02 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-286000-m02 event: Registered Node ha-286000-m02 in Controller
	
	
	==> dmesg <==
	[  +0.006640] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.760553] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +1.351722] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000003] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug16 17:02] systemd-fstab-generator[519]: Ignoring "noauto" option for root device
	[  +0.114039] systemd-fstab-generator[531]: Ignoring "noauto" option for root device
	[  +1.207693] kauditd_printk_skb: 42 callbacks suppressed
	[  +0.611653] systemd-fstab-generator[786]: Ignoring "noauto" option for root device
	[  +0.265175] systemd-fstab-generator[846]: Ignoring "noauto" option for root device
	[  +0.101006] systemd-fstab-generator[858]: Ignoring "noauto" option for root device
	[  +0.123308] systemd-fstab-generator[872]: Ignoring "noauto" option for root device
	[  +2.497195] systemd-fstab-generator[1086]: Ignoring "noauto" option for root device
	[  +0.110299] systemd-fstab-generator[1098]: Ignoring "noauto" option for root device
	[  +0.095670] systemd-fstab-generator[1110]: Ignoring "noauto" option for root device
	[  +0.122299] systemd-fstab-generator[1126]: Ignoring "noauto" option for root device
	[  +3.636333] systemd-fstab-generator[1227]: Ignoring "noauto" option for root device
	[  +0.056190] kauditd_printk_skb: 233 callbacks suppressed
	[  +2.505796] systemd-fstab-generator[1479]: Ignoring "noauto" option for root device
	[  +3.634449] systemd-fstab-generator[1611]: Ignoring "noauto" option for root device
	[  +0.055756] kauditd_printk_skb: 70 callbacks suppressed
	[  +6.910973] systemd-fstab-generator[2107]: Ignoring "noauto" option for root device
	[  +0.079351] kauditd_printk_skb: 72 callbacks suppressed
	[  +9.264773] kauditd_printk_skb: 51 callbacks suppressed
	[Aug16 17:03] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [d8dadff6cec7] <==
	{"level":"info","ts":"2024-08-16T17:03:20.601869Z","caller":"rafthttp/peer.go:133","msg":"starting remote peer","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:20.601961Z","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:20.602332Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:20.602493Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:20.602718Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34","remote-peer-urls":["https://192.169.0.6:2380"]}
	{"level":"info","ts":"2024-08-16T17:03:20.606154Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:20.606173Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:20.606385Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:20.607792Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"b8c6c7563d17d844","raft-conf-change":"ConfChangeAddLearnerNode","raft-conf-change-node-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:21.553074Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:21.553169Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:21.561367Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"9633c02797b6d34","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-16T17:03:21.561449Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:21.561528Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:21.571776Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"9633c02797b6d34","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-16T17:03:21.571882Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:22.121813Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 switched to configuration voters=(676450350361439540 13314548521573537860)"}
	{"level":"info","ts":"2024-08-16T17:03:22.121979Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844"}
	{"level":"info","ts":"2024-08-16T17:03:22.122011Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"b8c6c7563d17d844","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"9633c02797b6d34"}
	{"level":"warn","ts":"2024-08-16T17:05:03.077229Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.58419ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/default/busybox-7dff88458\" ","response":"range_response_count:1 size:1985"}
	{"level":"info","ts":"2024-08-16T17:05:03.077410Z","caller":"traceutil/trace.go:171","msg":"trace[51649115] range","detail":"{range_begin:/registry/replicasets/default/busybox-7dff88458; range_end:; response_count:1; response_revision:920; }","duration":"112.799262ms","start":"2024-08-16T17:05:02.964599Z","end":"2024-08-16T17:05:03.077398Z","steps":["trace[51649115] 'agreement among raft nodes before linearized reading'  (duration: 72.831989ms)","trace[51649115] 'range keys from in-memory index tree'  (duration: 39.723026ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-16T17:05:03.078503Z","caller":"traceutil/trace.go:171","msg":"trace[1276232025] transaction","detail":"{read_only:false; response_revision:921; number_of_response:1; }","duration":"116.265354ms","start":"2024-08-16T17:05:02.962223Z","end":"2024-08-16T17:05:03.078489Z","steps":["trace[1276232025] 'process raft request'  (duration: 75.236605ms)","trace[1276232025] 'compare'  (duration: 39.643768ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-16T17:12:20.455094Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1242}
	{"level":"info","ts":"2024-08-16T17:12:20.478439Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1242,"took":"22.921055ms","hash":729855672,"current-db-size-bytes":3137536,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":1589248,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-08-16T17:12:20.478900Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":729855672,"revision":1242,"compact-revision":-1}
	
	
	==> kernel <==
	 17:16:56 up 15 min,  0 users,  load average: 0.21, 0.20, 0.14
	Linux ha-286000 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2677394dcd66] <==
	I0816 17:15:55.223123       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:16:05.227238       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:16:05.227482       1 main.go:299] handling current node
	I0816 17:16:05.227602       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:16:05.227679       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:16:15.225728       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:16:15.225955       1 main.go:299] handling current node
	I0816 17:16:15.226059       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:16:15.226166       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:16:25.225541       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:16:25.225628       1 main.go:299] handling current node
	I0816 17:16:25.225642       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:16:25.225648       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:16:35.223579       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:16:35.223618       1 main.go:299] handling current node
	I0816 17:16:35.223629       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:16:35.223637       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:16:45.222993       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:16:45.223332       1 main.go:299] handling current node
	I0816 17:16:45.223676       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:16:45.223947       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:16:55.222892       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:16:55.223037       1 main.go:299] handling current node
	I0816 17:16:55.223088       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:16:55.223105       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [5f3adc202a7a] <==
	I0816 17:02:22.200815       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0816 17:02:22.200855       1 cache.go:39] Caches are synced for autoregister controller
	E0816 17:02:22.203287       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0816 17:02:22.246497       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0816 17:02:23.102546       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0816 17:02:23.105482       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0816 17:02:23.105513       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0816 17:02:23.438732       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0816 17:02:23.465745       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0816 17:02:23.506778       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0816 17:02:23.510486       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5]
	I0816 17:02:23.511071       1 controller.go:615] quota admission added evaluator for: endpoints
	I0816 17:02:23.513769       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0816 17:02:24.114339       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0816 17:02:25.748425       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0816 17:02:25.762462       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0816 17:02:25.770706       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0816 17:02:29.466806       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0816 17:02:29.715365       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0816 17:16:53.377658       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51358: use of closed network connection
	E0816 17:16:53.575639       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51360: use of closed network connection
	E0816 17:16:53.910337       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51365: use of closed network connection
	E0816 17:16:54.106751       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51367: use of closed network connection
	E0816 17:16:54.415124       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51372: use of closed network connection
	E0816 17:16:54.604288       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51374: use of closed network connection
	
	
	==> kube-controller-manager [8f5867ee99d9] <==
	I0816 17:03:28.116114       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m02"
	I0816 17:03:28.213864       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m02"
	I0816 17:03:30.558882       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m02"
	I0816 17:03:39.011535       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m02"
	I0816 17:03:39.020670       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m02"
	I0816 17:03:43.141912       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m02"
	I0816 17:03:50.726543       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m02"
	I0816 17:05:02.843343       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="49.61706ms"
	I0816 17:05:02.889189       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="45.792718ms"
	I0816 17:05:02.932283       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="43.030665ms"
	I0816 17:05:03.088994       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="156.603755ms"
	I0816 17:05:03.131864       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="42.618017ms"
	I0816 17:05:03.132076       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="38.426µs"
	I0816 17:05:03.151764       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="18.908957ms"
	I0816 17:05:03.156449       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="69.081µs"
	I0816 17:05:04.790508       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="5.652445ms"
	I0816 17:05:04.790798       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="31.186µs"
	I0816 17:05:05.070211       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="30.141421ms"
	I0816 17:05:05.070269       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="20.554µs"
	I0816 17:05:22.202209       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m02"
	I0816 17:05:29.027322       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000"
	I0816 17:10:29.521403       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m02"
	I0816 17:10:35.744951       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000"
	I0816 17:15:35.199030       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m02"
	I0816 17:15:42.092912       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000"
	
	
	==> kube-proxy [81f6c96d4649] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 17:02:30.214569       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 17:02:30.222978       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0816 17:02:30.223011       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 17:02:30.267797       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 17:02:30.267825       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 17:02:30.267843       1 server_linux.go:169] "Using iptables Proxier"
	I0816 17:02:30.270154       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 17:02:30.270311       1 server.go:483] "Version info" version="v1.31.0"
	I0816 17:02:30.270319       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 17:02:30.271352       1 config.go:197] "Starting service config controller"
	I0816 17:02:30.271358       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 17:02:30.271372       1 config.go:104] "Starting endpoint slice config controller"
	I0816 17:02:30.271375       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 17:02:30.271598       1 config.go:326] "Starting node config controller"
	I0816 17:02:30.271603       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 17:02:30.371824       1 shared_informer.go:320] Caches are synced for node config
	I0816 17:02:30.371859       1 shared_informer.go:320] Caches are synced for service config
	I0816 17:02:30.371867       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [f7b2e9efdd94] <==
	W0816 17:02:22.176847       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 17:02:22.176852       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:22.176967       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 17:02:22.176999       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:22.177013       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 17:02:22.179079       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:22.179253       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 17:02:22.179322       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0816 17:02:23.072963       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 17:02:23.073256       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:23.081000       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 17:02:23.081176       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:23.142220       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 17:02:23.142263       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:23.166962       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 17:02:23.167003       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:23.255186       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 17:02:23.255230       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:23.456273       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 17:02:23.456447       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0816 17:02:26.460413       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0816 17:05:02.827326       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-k9m92\": pod busybox-7dff88458-k9m92 is already assigned to node \"ha-286000-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-k9m92" node="ha-286000-m02"
	E0816 17:05:02.827387       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a516859c-0da8-4ba9-a896-ac720495818f(default/busybox-7dff88458-k9m92) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-k9m92"
	E0816 17:05:02.827403       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-k9m92\": pod busybox-7dff88458-k9m92 is already assigned to node \"ha-286000-m02\"" pod="default/busybox-7dff88458-k9m92"
	I0816 17:05:02.827520       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-k9m92" node="ha-286000-m02"
	
	
	==> kubelet <==
	Aug 16 17:12:25 ha-286000 kubelet[2114]: E0816 17:12:25.669484    2114 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 17:12:25 ha-286000 kubelet[2114]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 17:12:25 ha-286000 kubelet[2114]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 17:12:25 ha-286000 kubelet[2114]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:12:25 ha-286000 kubelet[2114]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 17:13:25 ha-286000 kubelet[2114]: E0816 17:13:25.672677    2114 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 17:13:25 ha-286000 kubelet[2114]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 17:13:25 ha-286000 kubelet[2114]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 17:13:25 ha-286000 kubelet[2114]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:13:25 ha-286000 kubelet[2114]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 17:14:25 ha-286000 kubelet[2114]: E0816 17:14:25.670434    2114 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 17:14:25 ha-286000 kubelet[2114]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 17:14:25 ha-286000 kubelet[2114]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 17:14:25 ha-286000 kubelet[2114]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:14:25 ha-286000 kubelet[2114]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 17:15:25 ha-286000 kubelet[2114]: E0816 17:15:25.670848    2114 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 17:15:25 ha-286000 kubelet[2114]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 17:15:25 ha-286000 kubelet[2114]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 17:15:25 ha-286000 kubelet[2114]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:15:25 ha-286000 kubelet[2114]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 17:16:25 ha-286000 kubelet[2114]: E0816 17:16:25.672451    2114 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 17:16:25 ha-286000 kubelet[2114]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 17:16:25 ha-286000 kubelet[2114]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 17:16:25 ha-286000 kubelet[2114]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:16:25 ha-286000 kubelet[2114]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
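A note on the repeating kubelet errors at the end of the log above: they come from kubelet's periodic iptables "canary", which probes each protocol family by creating a throwaway chain; on this Buildroot guest the IPv6 nat table does not exist ("can't initialize ip6tables table `nat'"), so the IPv6 probe fails once a minute while IPv4 keeps working. A minimal Go sketch of that probe pattern (a hypothetical standalone helper, not kubelet's actual code):

package main

import (
	"fmt"
	"os/exec"
)

// canary probes whether the given iptables binary can manage the table by
// creating and then deleting a scratch chain, mirroring the failure mode in
// the kubelet log above.
func canary(bin, table, chain string) error {
	if out, err := exec.Command(bin, "-t", table, "-N", chain).CombinedOutput(); err != nil {
		return fmt.Errorf("%s -t %s -N %s: %v: %s", bin, table, chain, err, out)
	}
	// Best-effort cleanup of the scratch chain.
	_ = exec.Command(bin, "-t", table, "-X", chain).Run()
	return nil
}

func main() {
	for _, bin := range []string{"iptables", "ip6tables"} {
		if err := canary(bin, "nat", "KUBE-KUBELET-CANARY"); err != nil {
			// Harmless when the address family is unused, as on this guest.
			fmt.Println("canary failed:", err)
		}
	}
}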
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-286000 -n ha-286000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-286000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7dff88458-99xmp
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DeployApp]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-286000 describe pod busybox-7dff88458-99xmp
helpers_test.go:282: (dbg) kubectl --context ha-286000 describe pod busybox-7dff88458-99xmp:

                                                
                                                
-- stdout --
	Name:             busybox-7dff88458-99xmp
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wxcs4 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-wxcs4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  11m                  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  92s (x2 over 6m32s)  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  92s (x3 over 11m)    default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeployApp (714.83s)
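The failure mode is visible in the describe output above: the third busybox replica is Pending with "0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules", consistent with a deployment that spreads one replica per node while only two of the expected control-plane nodes ever became schedulable. A minimal Go sketch of a required anti-affinity term that behaves this way (the app=busybox selector and hostname topology key are assumptions for illustration, not the test's actual manifest):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// One replica per hostname: with three replicas but only two Ready
	// nodes (ha-286000 and ha-286000-m02), the third pod can never satisfy
	// this term and stays Pending.
	antiAffinity := &corev1.PodAntiAffinity{
		// A required (not preferred) term matches the hard failure in the
		// events: no node qualifies, and preemption cannot create one.
		RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
			LabelSelector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "busybox"}, // assumed label
			},
			TopologyKey: "kubernetes.io/hostname",
		}},
	}
	fmt.Printf("%+v\n", antiAffinity)
}

Because the term is required rather than preferred, the scheduler also reports "No preemption victims found": evicting a pod would not produce a node that satisfies the constraint.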

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (3.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-286000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-286000 -- exec busybox-7dff88458-99xmp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:207: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p ha-286000 -- exec busybox-7dff88458-99xmp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": exit status 1 (124.259493ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): pod busybox-7dff88458-99xmp does not have a host assigned

                                                
                                                
** /stderr **
ha_test.go:209: Pod busybox-7dff88458-99xmp could not resolve 'host.minikube.internal': exit status 1
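This BadRequest follows directly from the DeployApp failure above: busybox-7dff88458-99xmp was never scheduled (Node: <none> in the earlier describe output), and the API server refuses exec for a pod with no assigned host. A minimal sketch, assuming k8s.io/api types, of a guard that skips such pods before exec'ing (execable is a hypothetical helper, not part of ha_test.go):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// execable reports whether an exec request can plausibly succeed: the API
// server returns BadRequest for pods that have not been bound to a node.
func execable(p corev1.Pod) bool {
	return p.Spec.NodeName != "" && p.Status.Phase == corev1.PodRunning
}

func main() {
	var pending corev1.Pod
	pending.Name = "busybox-7dff88458-99xmp" // Pending, Node: <none> per the describe output
	fmt.Println(pending.Name, "execable:", execable(pending)) // prints: ... execable: false
}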
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-286000 -- exec busybox-7dff88458-dvmvk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-286000 -- exec busybox-7dff88458-dvmvk -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-286000 -- exec busybox-7dff88458-k9m92 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-286000 -- exec busybox-7dff88458-k9m92 -- sh -c "ping -c 1 192.169.0.1"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-286000 -n ha-286000
helpers_test.go:244: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-286000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-286000 logs -n 25: (2.11569345s)
helpers_test.go:252: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:15 PDT | 16 Aug 24 10:15 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:15 PDT | 16 Aug 24 10:15 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:15 PDT | 16 Aug 24 10:15 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:15 PDT | 16 Aug 24 10:15 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:15 PDT | 16 Aug 24 10:15 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:15 PDT | 16 Aug 24 10:15 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:15 PDT | 16 Aug 24 10:15 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT |                     |
	|         | busybox-7dff88458-99xmp --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT |                     |
	|         | busybox-7dff88458-99xmp --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT |                     |
	|         | busybox-7dff88458-99xmp -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92 -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT |                     |
	|         | busybox-7dff88458-99xmp              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92 -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 10:01:48
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 10:01:48.113296    3758 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:01:48.113462    3758 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:01:48.113467    3758 out.go:358] Setting ErrFile to fd 2...
	I0816 10:01:48.113470    3758 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:01:48.113648    3758 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1276/.minikube/bin
	I0816 10:01:48.115192    3758 out.go:352] Setting JSON to false
	I0816 10:01:48.140204    3758 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1878,"bootTime":1723825830,"procs":433,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0816 10:01:48.140316    3758 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:01:48.194498    3758 out.go:177] * [ha-286000] minikube v1.33.1 on Darwin 14.6.1
	I0816 10:01:48.236650    3758 notify.go:220] Checking for updates...
	I0816 10:01:48.262360    3758 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:01:48.320415    3758 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:01:48.381368    3758 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0816 10:01:48.452505    3758 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:01:48.474398    3758 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:01:48.495459    3758 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:01:48.520120    3758 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:01:48.550379    3758 out.go:177] * Using the hyperkit driver based on user configuration
	I0816 10:01:48.592331    3758 start.go:297] selected driver: hyperkit
	I0816 10:01:48.592358    3758 start.go:901] validating driver "hyperkit" against <nil>
	I0816 10:01:48.592382    3758 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:01:48.596757    3758 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:01:48.596907    3758 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19461-1276/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0816 10:01:48.605545    3758 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0816 10:01:48.609748    3758 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:01:48.609772    3758 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0816 10:01:48.609805    3758 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 10:01:48.610046    3758 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:01:48.610094    3758 cni.go:84] Creating CNI manager for ""
	I0816 10:01:48.610103    3758 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0816 10:01:48.610108    3758 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0816 10:01:48.610236    3758 start.go:340] cluster config:
	{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:01:48.610323    3758 iso.go:125] acquiring lock: {Name:mkb430b43b5e7a8a683454482519837ed996c78d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:01:48.632334    3758 out.go:177] * Starting "ha-286000" primary control-plane node in "ha-286000" cluster
	I0816 10:01:48.653456    3758 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:01:48.653537    3758 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0816 10:01:48.653563    3758 cache.go:56] Caching tarball of preloaded images
	I0816 10:01:48.653777    3758 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 10:01:48.653813    3758 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:01:48.654332    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:01:48.654370    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json: {Name:mk79fdae2e45cdfc987caaf16c5f211150e90dc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:01:48.654995    3758 start.go:360] acquireMachinesLock for ha-286000: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:01:48.655106    3758 start.go:364] duration metric: took 87.458µs to acquireMachinesLock for "ha-286000"
	I0816 10:01:48.655154    3758 start.go:93] Provisioning new machine with config: &{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:01:48.655246    3758 start.go:125] createHost starting for "" (driver="hyperkit")
	I0816 10:01:48.677450    3758 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 10:01:48.677701    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:01:48.677786    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:01:48.687593    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51004
	I0816 10:01:48.687935    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:01:48.688347    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:01:48.688357    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:01:48.688562    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:01:48.688668    3758 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:01:48.688765    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:01:48.688895    3758 start.go:159] libmachine.API.Create for "ha-286000" (driver="hyperkit")
	I0816 10:01:48.688916    3758 client.go:168] LocalClient.Create starting
	I0816 10:01:48.688948    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem
	I0816 10:01:48.688999    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:01:48.689012    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:01:48.689063    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem
	I0816 10:01:48.689100    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:01:48.689111    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:01:48.689128    3758 main.go:141] libmachine: Running pre-create checks...
	I0816 10:01:48.689135    3758 main.go:141] libmachine: (ha-286000) Calling .PreCreateCheck
	I0816 10:01:48.689224    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:48.689427    3758 main.go:141] libmachine: (ha-286000) Calling .GetConfigRaw
	I0816 10:01:48.698771    3758 main.go:141] libmachine: Creating machine...
	I0816 10:01:48.698794    3758 main.go:141] libmachine: (ha-286000) Calling .Create
	I0816 10:01:48.699018    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:48.699296    3758 main.go:141] libmachine: (ha-286000) DBG | I0816 10:01:48.699015    3766 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:01:48.699450    3758 main.go:141] libmachine: (ha-286000) Downloading /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1276/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 10:01:48.884950    3758 main.go:141] libmachine: (ha-286000) DBG | I0816 10:01:48.884833    3766 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa...
	I0816 10:01:48.986348    3758 main.go:141] libmachine: (ha-286000) DBG | I0816 10:01:48.986275    3766 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/ha-286000.rawdisk...
	I0816 10:01:48.986388    3758 main.go:141] libmachine: (ha-286000) DBG | Writing magic tar header
	I0816 10:01:48.986396    3758 main.go:141] libmachine: (ha-286000) DBG | Writing SSH key tar header
	I0816 10:01:48.987038    3758 main.go:141] libmachine: (ha-286000) DBG | I0816 10:01:48.987004    3766 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000 ...
	I0816 10:01:49.360700    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:49.360726    3758 main.go:141] libmachine: (ha-286000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/hyperkit.pid
	I0816 10:01:49.360741    3758 main.go:141] libmachine: (ha-286000) DBG | Using UUID ad96de67-e238-408c-89eb-d74e5b68d297
	I0816 10:01:49.471852    3758 main.go:141] libmachine: (ha-286000) DBG | Generated MAC 66:c8:48:4e:12:1b
	I0816 10:01:49.471873    3758 main.go:141] libmachine: (ha-286000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000
	I0816 10:01:49.471908    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ad96de67-e238-408c-89eb-d74e5b68d297", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000b2330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:01:49.471951    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ad96de67-e238-408c-89eb-d74e5b68d297", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000b2330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:01:49.471996    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "ad96de67-e238-408c-89eb-d74e5b68d297", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/ha-286000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"}
	I0816 10:01:49.472043    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U ad96de67-e238-408c-89eb-d74e5b68d297 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/ha-286000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"
	I0816 10:01:49.472063    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 10:01:49.475179    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: Pid is 3771
	I0816 10:01:49.475583    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 0
	I0816 10:01:49.475597    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:49.475674    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:49.476510    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:49.476583    3758 main.go:141] libmachine: (ha-286000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0816 10:01:49.476592    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:01:49.476602    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:01:49.476612    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:01:49.482826    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 10:01:49.534430    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 10:01:49.535040    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:01:49.535056    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:01:49.535064    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:01:49.535072    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:01:49.909510    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 10:01:49.909527    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 10:01:50.024439    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:01:50.024459    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:01:50.024479    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:01:50.024494    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:01:50.025363    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 10:01:50.025375    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 10:01:51.477573    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 1
	I0816 10:01:51.477587    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:51.477660    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:51.478481    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:51.478504    3758 main.go:141] libmachine: (ha-286000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0816 10:01:51.478511    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:01:51.478520    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:01:51.478531    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:01:53.479582    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 2
	I0816 10:01:53.479599    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:53.479760    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:53.480630    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:53.480678    3758 main.go:141] libmachine: (ha-286000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0816 10:01:53.480688    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:01:53.480697    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:01:53.480710    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:01:55.482614    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 3
	I0816 10:01:55.482630    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:55.482703    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:55.483616    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:55.483662    3758 main.go:141] libmachine: (ha-286000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0816 10:01:55.483672    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:01:55.483679    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:01:55.483687    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:01:55.581983    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:55 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 10:01:55.582063    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:55 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 10:01:55.582075    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:55 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 10:01:55.606414    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:55 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 10:01:57.484306    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 4
	I0816 10:01:57.484322    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:57.484414    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:57.485196    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:57.485261    3758 main.go:141] libmachine: (ha-286000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0816 10:01:57.485273    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:01:57.485286    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:01:57.485294    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:01:59.485978    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 5
	I0816 10:01:59.486008    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:59.486192    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:59.487610    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:59.487709    3758 main.go:141] libmachine: (ha-286000) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:01:59.487730    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:01:59.487784    3758 main.go:141] libmachine: (ha-286000) DBG | Found match: 66:c8:48:4e:12:1b
	I0816 10:01:59.487797    3758 main.go:141] libmachine: (ha-286000) DBG | IP: 192.169.0.5
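
The lookup above polls macOS's vmnet lease database until the MAC that hyperkit generated shows up. A minimal Go sketch of that lookup, assuming the per-line name=/ip_address=/hw_address= layout shown in the dhcp entries logged above (the helper name and field handling are illustrative, not minikube's actual implementation):

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // findIPByMAC scans a vmnet DHCP lease file for an entry whose
    // hw_address matches the given MAC and returns its ip_address.
    // Hypothetical helper; field names follow the entries logged above.
    func findIPByMAC(leaseFile, mac string) (string, error) {
    	f, err := os.Open(leaseFile)
    	if err != nil {
    		return "", err
    	}
    	defer f.Close()

    	var ip string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		switch {
    		case strings.HasPrefix(line, "ip_address="):
    			// ip_address precedes hw_address within each entry.
    			ip = strings.TrimPrefix(line, "ip_address=")
    		case strings.HasPrefix(line, "hw_address="):
    			// Logged form is "1,66:c8:48:4e:12:1b" (type,MAC).
    			if strings.HasSuffix(line, ","+mac) || strings.HasSuffix(line, "="+mac) {
    				return ip, nil
    			}
    		}
    	}
    	if err := sc.Err(); err != nil {
    		return "", err
    	}
    	return "", fmt.Errorf("no lease found for %s", mac)
    }

    func main() {
    	ip, err := findIPByMAC("/var/db/dhcpd_leases", "66:c8:48:4e:12:1b")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println(ip) // 192.169.0.5 in this run
    }
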
	I0816 10:01:59.487831    3758 main.go:141] libmachine: (ha-286000) Calling .GetConfigRaw
	I0816 10:01:59.488860    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:01:59.489069    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:01:59.489276    3758 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 10:01:59.489289    3758 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:01:59.489463    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:59.489567    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:59.490615    3758 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 10:01:59.490628    3758 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 10:01:59.490644    3758 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 10:01:59.490652    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:01:59.490782    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:01:59.490923    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:01:59.491055    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:01:59.491190    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:01:59.491362    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:01:59.491595    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:01:59.491605    3758 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 10:01:59.511220    3758 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0816 10:02:02.573095    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
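
Note that the first probe at 10:01:59 fails the handshake (the guest's sshd is up before the injected key is usable) and the driver simply retries until "exit 0" succeeds about three seconds later. A sketch of such a probe loop using golang.org/x/crypto/ssh; the address and key path are taken from this run, but the loop itself is illustrative rather than minikube's WaitForSSH:

    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    // waitForSSH retries an "exit 0" probe until the guest accepts the key.
    func waitForSSH(addr, user, keyPath string, timeout time.Duration) error {
    	keyBytes, err := os.ReadFile(keyPath)
    	if err != nil {
    		return err
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		return err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fresh test VM, host key not pinned
    		Timeout:         5 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		client, err := ssh.Dial("tcp", addr, cfg)
    		if err == nil {
    			sess, serr := client.NewSession()
    			if serr == nil {
    				rerr := sess.Run("exit 0") // the probe command logged above
    				sess.Close()
    				client.Close()
    				if rerr == nil {
    					return nil
    				}
    			} else {
    				client.Close()
    			}
    		}
    		time.Sleep(time.Second)
    	}
    	return fmt.Errorf("ssh to %s not ready after %s", addr, timeout)
    }

    func main() {
    	err := waitForSSH("192.169.0.5:22", "docker",
    		"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa",
    		time.Minute)
    	fmt.Println(err)
    }
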
	I0816 10:02:02.573108    3758 main.go:141] libmachine: Detecting the provisioner...
	I0816 10:02:02.573113    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:02.573255    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:02.573385    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.573489    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.573602    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:02.573753    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:02.573906    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:02.573914    3758 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 10:02:02.632510    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 10:02:02.632561    3758 main.go:141] libmachine: found compatible host: buildroot
	I0816 10:02:02.632568    3758 main.go:141] libmachine: Provisioning with buildroot...
	I0816 10:02:02.632573    3758 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:02:02.632721    3758 buildroot.go:166] provisioning hostname "ha-286000"
	I0816 10:02:02.632732    3758 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:02:02.632834    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:02.632956    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:02.633046    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.633122    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.633236    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:02.633363    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:02.633492    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:02.633500    3758 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-286000 && echo "ha-286000" | sudo tee /etc/hostname
	I0816 10:02:02.702336    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-286000
	
	I0816 10:02:02.702354    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:02.702482    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:02.702569    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.702670    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.702756    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:02.702893    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:02.703040    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:02.703052    3758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-286000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-286000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-286000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 10:02:02.766997    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 10:02:02.767016    3758 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19461-1276/.minikube CaCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19461-1276/.minikube}
	I0816 10:02:02.767026    3758 buildroot.go:174] setting up certificates
	I0816 10:02:02.767038    3758 provision.go:84] configureAuth start
	I0816 10:02:02.767044    3758 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:02:02.767175    3758 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:02:02.767270    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:02.767372    3758 provision.go:143] copyHostCerts
	I0816 10:02:02.767406    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:02:02.767476    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem, removing ...
	I0816 10:02:02.767485    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:02:02.767630    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem (1679 bytes)
	I0816 10:02:02.767837    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:02:02.767868    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem, removing ...
	I0816 10:02:02.767873    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:02:02.767991    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem (1078 bytes)
	I0816 10:02:02.768146    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:02:02.768179    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem, removing ...
	I0816 10:02:02.768184    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:02:02.768268    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem (1123 bytes)
	I0816 10:02:02.768410    3758 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem org=jenkins.ha-286000 san=[127.0.0.1 192.169.0.5 ha-286000 localhost minikube]
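
The server certificate above is issued with SANs covering loopback, the leased IP, the hostname, localhost, and minikube. A self-contained crypto/x509 sketch producing a certificate with that SAN set; it self-signs for brevity, whereas minikube signs with ca.pem/ca-key.pem:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-286000"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
    		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs as logged: IP and DNS entries go in separate fields.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.5")},
    		DNSNames:    []string{"ha-286000", "localhost", "minikube"},
    	}
    	// Self-signed for the sketch: template doubles as parent.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
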
	I0816 10:02:02.915225    3758 provision.go:177] copyRemoteCerts
	I0816 10:02:02.915279    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 10:02:02.915299    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:02.915444    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:02.915558    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.915640    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:02.915723    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:02.953069    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 10:02:02.953145    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 10:02:02.973516    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 10:02:02.973600    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0816 10:02:02.993038    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 10:02:02.993100    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 10:02:03.013565    3758 provision.go:87] duration metric: took 246.514857ms to configureAuth
	I0816 10:02:03.013581    3758 buildroot.go:189] setting minikube options for container-runtime
	I0816 10:02:03.013724    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:02:03.013737    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:03.013889    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:03.013978    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:03.014067    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.014147    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.014250    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:03.014369    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:03.014498    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:03.014506    3758 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 10:02:03.073645    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 10:02:03.073660    3758 buildroot.go:70] root file system type: tmpfs
	I0816 10:02:03.073734    3758 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 10:02:03.073745    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:03.073872    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:03.073966    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.074063    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.074164    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:03.074292    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:03.074429    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:03.074479    3758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 10:02:03.148891    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 10:02:03.148912    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:03.149052    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:03.149138    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.149235    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.149324    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:03.149466    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:03.149604    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:03.149616    3758 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 10:02:04.780498    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0816 10:02:04.780519    3758 main.go:141] libmachine: Checking connection to Docker...
	I0816 10:02:04.780526    3758 main.go:141] libmachine: (ha-286000) Calling .GetURL
	I0816 10:02:04.780691    3758 main.go:141] libmachine: Docker is up and running!
	I0816 10:02:04.780698    3758 main.go:141] libmachine: Reticulating splines...
	I0816 10:02:04.780702    3758 client.go:171] duration metric: took 16.091876944s to LocalClient.Create
	I0816 10:02:04.780713    3758 start.go:167] duration metric: took 16.091916939s to libmachine.API.Create "ha-286000"
	I0816 10:02:04.780722    3758 start.go:293] postStartSetup for "ha-286000" (driver="hyperkit")
	I0816 10:02:04.780729    3758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 10:02:04.780738    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:04.780873    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 10:02:04.780884    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:04.780956    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:04.781047    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:04.781154    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:04.781275    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:04.819650    3758 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 10:02:04.823314    3758 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 10:02:04.823335    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/addons for local assets ...
	I0816 10:02:04.823443    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/files for local assets ...
	I0816 10:02:04.823624    3758 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> 18312.pem in /etc/ssl/certs
	I0816 10:02:04.823631    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /etc/ssl/certs/18312.pem
	I0816 10:02:04.823837    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 10:02:04.831860    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:02:04.850901    3758 start.go:296] duration metric: took 70.172878ms for postStartSetup
	I0816 10:02:04.850935    3758 main.go:141] libmachine: (ha-286000) Calling .GetConfigRaw
	I0816 10:02:04.851561    3758 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:02:04.851702    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:02:04.852053    3758 start.go:128] duration metric: took 16.196888252s to createHost
	I0816 10:02:04.852071    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:04.852167    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:04.852261    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:04.852344    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:04.852420    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:04.852525    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:04.852648    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:04.852655    3758 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 10:02:04.911299    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723827724.986438448
	
	I0816 10:02:04.911317    3758 fix.go:216] guest clock: 1723827724.986438448
	I0816 10:02:04.911322    3758 fix.go:229] Guest: 2024-08-16 10:02:04.986438448 -0700 PDT Remote: 2024-08-16 10:02:04.852061 -0700 PDT m=+16.776125584 (delta=134.377448ms)
	I0816 10:02:04.911341    3758 fix.go:200] guest clock delta is within tolerance: 134.377448ms
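
The clock check reads `date +%s.%N` on the guest and compares it against the host's wall clock; here the 134ms delta is under the tolerance, so no adjustment is made. A sketch of that comparison, assuming the seconds.nanoseconds output format shown above (the tolerance value is illustrative):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock turns `date +%s.%N` output such as
    // "1723827724.986438448" into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	const tolerance = time.Second // illustrative threshold, not minikube's
    	guest, err := parseGuestClock("1723827724.986438448")
    	if err != nil {
    		panic(err)
    	}
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	fmt.Printf("guest clock delta: %s (within tolerance: %v)\n",
    		delta, delta <= tolerance)
    }
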
	I0816 10:02:04.911345    3758 start.go:83] releasing machines lock for "ha-286000", held for 16.256325696s
	I0816 10:02:04.911364    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:04.911498    3758 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:02:04.911592    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:04.911942    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:04.912068    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:04.912144    3758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 10:02:04.912171    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:04.912218    3758 ssh_runner.go:195] Run: cat /version.json
	I0816 10:02:04.912231    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:04.912262    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:04.912305    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:04.912360    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:04.912384    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:04.912437    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:04.912464    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:04.912547    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:04.912567    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:04.945688    3758 ssh_runner.go:195] Run: systemctl --version
	I0816 10:02:04.997158    3758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 10:02:05.002278    3758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 10:02:05.002315    3758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 10:02:05.015089    3758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 10:02:05.015098    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:02:05.015195    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:02:05.030338    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 10:02:05.038588    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 10:02:05.046898    3758 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 10:02:05.046932    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 10:02:05.055211    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:02:05.063533    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 10:02:05.073244    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:02:05.081895    3758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 10:02:05.090915    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 10:02:05.099773    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 10:02:05.108719    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 10:02:05.117772    3758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 10:02:05.125574    3758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 10:02:05.145403    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:05.261568    3758 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0816 10:02:05.278920    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:02:05.278996    3758 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 10:02:05.293925    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:02:05.306704    3758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 10:02:05.320000    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:02:05.330230    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:02:05.340255    3758 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 10:02:05.369098    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:02:05.379374    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:02:05.395120    3758 ssh_runner.go:195] Run: which cri-dockerd
	I0816 10:02:05.397921    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 10:02:05.404866    3758 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 10:02:05.417935    3758 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 10:02:05.514793    3758 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 10:02:05.626208    3758 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 10:02:05.626292    3758 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 10:02:05.640317    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:05.737978    3758 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:02:08.105430    3758 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.367458496s)
	I0816 10:02:08.105491    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0816 10:02:08.115970    3758 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0816 10:02:08.129325    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:02:08.140312    3758 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0816 10:02:08.237628    3758 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0816 10:02:08.340285    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:08.436168    3758 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0816 10:02:08.457808    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:02:08.468314    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:08.561830    3758 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0816 10:02:08.620080    3758 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0816 10:02:08.620164    3758 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0816 10:02:08.624716    3758 start.go:563] Will wait 60s for crictl version
	I0816 10:02:08.624764    3758 ssh_runner.go:195] Run: which crictl
	I0816 10:02:08.627806    3758 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 10:02:08.660437    3758 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0816 10:02:08.660505    3758 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:02:08.678073    3758 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:02:08.722165    3758 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0816 10:02:08.722217    3758 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:02:08.722605    3758 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0816 10:02:08.727140    3758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:02:08.736769    3758 kubeadm.go:883] updating cluster {Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 10:02:08.736830    3758 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:02:08.736887    3758 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 10:02:08.748223    3758 docker.go:685] Got preloaded images: 
	I0816 10:02:08.748236    3758 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.0 wasn't preloaded
	I0816 10:02:08.748294    3758 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0816 10:02:08.755601    3758 ssh_runner.go:195] Run: which lz4
	I0816 10:02:08.758510    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0816 10:02:08.758630    3758 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 10:02:08.761594    3758 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 10:02:08.761610    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342554258 bytes)
	I0816 10:02:09.771496    3758 docker.go:649] duration metric: took 1.012922149s to copy over tarball
	I0816 10:02:09.771559    3758 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 10:02:12.050040    3758 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.27849665s)
	I0816 10:02:12.050054    3758 ssh_runner.go:146] rm: /preloaded.tar.lz4
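
The preload path scps a ~342MB image tarball into the guest and unpacks it into /var/lib/docker before restarting the daemon. A small os/exec sketch of the extract step mirroring the tar invocation logged above, assuming tar and lz4 are available on the target (run locally here for illustration):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Mirror of the logged extract step: unpack the lz4-compressed
    	// preload tarball into /var, preserving security xattrs.
    	cmd := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		fmt.Printf("extract failed: %v\n%s", err, out)
    		return
    	}
    	fmt.Println("preload extracted")
    }
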
	I0816 10:02:12.075438    3758 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0816 10:02:12.083331    3758 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0816 10:02:12.097261    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:12.204348    3758 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:02:14.516272    3758 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.311936705s)
	I0816 10:02:14.516363    3758 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 10:02:14.529500    3758 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0816 10:02:14.529519    3758 cache_images.go:84] Images are preloaded, skipping loading
	I0816 10:02:14.529530    3758 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.0 docker true true} ...
	I0816 10:02:14.529607    3758 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-286000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 10:02:14.529674    3758 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0816 10:02:14.568093    3758 cni.go:84] Creating CNI manager for ""
	I0816 10:02:14.568107    3758 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0816 10:02:14.568120    3758 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 10:02:14.568134    3758 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-286000 NodeName:ha-286000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 10:02:14.568222    3758 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-286000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 10:02:14.568237    3758 kube-vip.go:115] generating kube-vip config ...
	I0816 10:02:14.568286    3758 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 10:02:14.580706    3758 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 10:02:14.580778    3758 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0816 10:02:14.580832    3758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 10:02:14.588124    3758 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 10:02:14.588174    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0816 10:02:14.595283    3758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0816 10:02:14.608721    3758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 10:02:14.622036    3758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0816 10:02:14.635525    3758 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1446 bytes)
	I0816 10:02:14.648868    3758 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0816 10:02:14.651748    3758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:02:14.661305    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:14.752561    3758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:02:14.768967    3758 certs.go:68] Setting up /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000 for IP: 192.169.0.5
	I0816 10:02:14.768980    3758 certs.go:194] generating shared ca certs ...
	I0816 10:02:14.768991    3758 certs.go:226] acquiring lock for ca certs: {Name:mka549fbc467849834d2267da204028c49d88a3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:14.769183    3758 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key
	I0816 10:02:14.769258    3758 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key
	I0816 10:02:14.769269    3758 certs.go:256] generating profile certs ...
	I0816 10:02:14.769315    3758 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key
	I0816 10:02:14.769328    3758 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt with IP's: []
	I0816 10:02:14.849573    3758 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt ...
	I0816 10:02:14.849589    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt: {Name:mk9c6a2b5871e8d33f12e0a58268b028ea37ab4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:14.849911    3758 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key ...
	I0816 10:02:14.849919    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key: {Name:mk7d8ed264e583d7ced6b16b60c72c11b15a1f22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:14.850137    3758 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.af99fd6a
	I0816 10:02:14.850151    3758 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.af99fd6a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.254]
	I0816 10:02:14.968129    3758 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.af99fd6a ...
	I0816 10:02:14.968143    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.af99fd6a: {Name:mk3b8945a16852dca8274ec1ab3a9f2eade80831 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:14.968464    3758 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.af99fd6a ...
	I0816 10:02:14.968473    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.af99fd6a: {Name:mk90fcce3348a214c4733fe5a1fb806faff7773f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:14.968717    3758 certs.go:381] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.af99fd6a -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt
	I0816 10:02:14.968940    3758 certs.go:385] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.af99fd6a -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key
	I0816 10:02:14.969171    3758 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key
	I0816 10:02:14.969185    3758 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt with IP's: []
	I0816 10:02:15.029329    3758 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt ...
	I0816 10:02:15.029338    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt: {Name:mk70aaa3903fb58df2124d779dbea07aa95769f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:15.029604    3758 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key ...
	I0816 10:02:15.029611    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key: {Name:mka3a85354927e8da31f9dfc67f92dea3c77ca65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:15.029850    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 10:02:15.029876    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 10:02:15.029898    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 10:02:15.029940    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 10:02:15.029958    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 10:02:15.029990    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 10:02:15.030008    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 10:02:15.030025    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 10:02:15.030118    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem (1338 bytes)
	W0816 10:02:15.030163    3758 certs.go:480] ignoring /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831_empty.pem, impossibly tiny 0 bytes
	I0816 10:02:15.030171    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 10:02:15.030240    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem (1078 bytes)
	I0816 10:02:15.030268    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem (1123 bytes)
	I0816 10:02:15.030354    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem (1679 bytes)
	I0816 10:02:15.030467    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:02:15.030499    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:15.030525    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem -> /usr/share/ca-certificates/1831.pem
	I0816 10:02:15.030542    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /usr/share/ca-certificates/18312.pem
	I0816 10:02:15.031030    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 10:02:15.051618    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 10:02:15.071318    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 10:02:15.091017    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 10:02:15.110497    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0816 10:02:15.131033    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 10:02:15.150747    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 10:02:15.170186    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 10:02:15.189808    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 10:02:15.209868    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem --> /usr/share/ca-certificates/1831.pem (1338 bytes)
	I0816 10:02:15.229378    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /usr/share/ca-certificates/18312.pem (1708 bytes)
	I0816 10:02:15.248667    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 10:02:15.262035    3758 ssh_runner.go:195] Run: openssl version
	I0816 10:02:15.266135    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 10:02:15.275959    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:15.279367    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:48 /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:15.279403    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:15.283611    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 10:02:15.291872    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1831.pem && ln -fs /usr/share/ca-certificates/1831.pem /etc/ssl/certs/1831.pem"
	I0816 10:02:15.299890    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1831.pem
	I0816 10:02:15.303179    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 16:57 /usr/share/ca-certificates/1831.pem
	I0816 10:02:15.303212    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1831.pem
	I0816 10:02:15.307423    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1831.pem /etc/ssl/certs/51391683.0"
	I0816 10:02:15.315741    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18312.pem && ln -fs /usr/share/ca-certificates/18312.pem /etc/ssl/certs/18312.pem"
	I0816 10:02:15.324088    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18312.pem
	I0816 10:02:15.327441    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 16:57 /usr/share/ca-certificates/18312.pem
	I0816 10:02:15.327479    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18312.pem
	I0816 10:02:15.331723    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18312.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 10:02:15.339921    3758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 10:02:15.343061    3758 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 10:02:15.343109    3758 kubeadm.go:392] StartCluster: {Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:02:15.343194    3758 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0816 10:02:15.356016    3758 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 10:02:15.363405    3758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 10:02:15.370635    3758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 10:02:15.377806    3758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 10:02:15.377819    3758 kubeadm.go:157] found existing configuration files:
	
	I0816 10:02:15.377855    3758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 10:02:15.384868    3758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 10:02:15.384903    3758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 10:02:15.392001    3758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 10:02:15.398935    3758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 10:02:15.398971    3758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 10:02:15.406181    3758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 10:02:15.413108    3758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 10:02:15.413142    3758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 10:02:15.422603    3758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 10:02:15.435692    3758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 10:02:15.435749    3758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 10:02:15.450920    3758 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 10:02:15.521417    3758 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 10:02:15.521501    3758 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 10:02:15.590843    3758 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 10:02:15.590923    3758 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 10:02:15.591008    3758 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 10:02:15.600874    3758 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 10:02:15.632377    3758 out.go:235]   - Generating certificates and keys ...
	I0816 10:02:15.632427    3758 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 10:02:15.632475    3758 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 10:02:15.791885    3758 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0816 10:02:15.852803    3758 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0816 10:02:15.961982    3758 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0816 10:02:16.146557    3758 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0816 10:02:16.493401    3758 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0816 10:02:16.493556    3758 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-286000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I0816 10:02:16.704832    3758 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0816 10:02:16.705108    3758 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-286000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I0816 10:02:16.922818    3758 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0816 10:02:17.082810    3758 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0816 10:02:17.393939    3758 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0816 10:02:17.394070    3758 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 10:02:17.465159    3758 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 10:02:17.731984    3758 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 10:02:18.019833    3758 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 10:02:18.164168    3758 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 10:02:18.273756    3758 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 10:02:18.274215    3758 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 10:02:18.275949    3758 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 10:02:18.300490    3758 out.go:235]   - Booting up control plane ...
	I0816 10:02:18.300569    3758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 10:02:18.300638    3758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 10:02:18.300693    3758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 10:02:18.300780    3758 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 10:02:18.300850    3758 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 10:02:18.300879    3758 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 10:02:18.405245    3758 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 10:02:18.405330    3758 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 10:02:18.909805    3758 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 504.760374ms
	I0816 10:02:18.909928    3758 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0816 10:02:24.844637    3758 kubeadm.go:310] [api-check] The API server is healthy after 5.938882642s
	I0816 10:02:24.854864    3758 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 10:02:24.862134    3758 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 10:02:24.890940    3758 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 10:02:24.891101    3758 kubeadm.go:310] [mark-control-plane] Marking the node ha-286000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 10:02:24.901441    3758 kubeadm.go:310] [bootstrap-token] Using token: 73merd.1elxantqs1p5mkiz
	I0816 10:02:24.939545    3758 out.go:235]   - Configuring RBAC rules ...
	I0816 10:02:24.939737    3758 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 10:02:24.944670    3758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 10:02:24.951393    3758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 10:02:24.953383    3758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 10:02:24.955899    3758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 10:02:24.958387    3758 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 10:02:25.251799    3758 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 10:02:25.676108    3758 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 10:02:26.249233    3758 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 10:02:26.249936    3758 kubeadm.go:310] 
	I0816 10:02:26.250001    3758 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 10:02:26.250014    3758 kubeadm.go:310] 
	I0816 10:02:26.250090    3758 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 10:02:26.250102    3758 kubeadm.go:310] 
	I0816 10:02:26.250122    3758 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 10:02:26.250175    3758 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 10:02:26.250221    3758 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 10:02:26.250229    3758 kubeadm.go:310] 
	I0816 10:02:26.250268    3758 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 10:02:26.250272    3758 kubeadm.go:310] 
	I0816 10:02:26.250305    3758 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 10:02:26.250313    3758 kubeadm.go:310] 
	I0816 10:02:26.250359    3758 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 10:02:26.250425    3758 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 10:02:26.250484    3758 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 10:02:26.250491    3758 kubeadm.go:310] 
	I0816 10:02:26.250569    3758 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 10:02:26.250648    3758 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 10:02:26.250656    3758 kubeadm.go:310] 
	I0816 10:02:26.250716    3758 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 73merd.1elxantqs1p5mkiz \
	I0816 10:02:26.250793    3758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:995590413ea712c4f7db2cf849d28264ebc291e6cd53d741ce1d22c1daaa1602 \
	I0816 10:02:26.250811    3758 kubeadm.go:310] 	--control-plane 
	I0816 10:02:26.250817    3758 kubeadm.go:310] 
	I0816 10:02:26.250879    3758 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 10:02:26.250885    3758 kubeadm.go:310] 
	I0816 10:02:26.250947    3758 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 73merd.1elxantqs1p5mkiz \
	I0816 10:02:26.251036    3758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:995590413ea712c4f7db2cf849d28264ebc291e6cd53d741ce1d22c1daaa1602 
	I0816 10:02:26.251297    3758 kubeadm.go:310] W0816 17:02:15.599099    1566 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 10:02:26.251521    3758 kubeadm.go:310] W0816 17:02:15.600578    1566 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 10:02:26.251608    3758 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 10:02:26.251620    3758 cni.go:84] Creating CNI manager for ""
	I0816 10:02:26.251624    3758 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0816 10:02:26.289324    3758 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 10:02:26.363318    3758 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0816 10:02:26.368971    3758 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0816 10:02:26.368983    3758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0816 10:02:26.382855    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0816 10:02:26.629869    3758 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 10:02:26.629947    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-286000 minikube.k8s.io/updated_at=2024_08_16T10_02_26_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd minikube.k8s.io/name=ha-286000 minikube.k8s.io/primary=true
	I0816 10:02:26.629949    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:26.658712    3758 ops.go:34] apiserver oom_adj: -16
	I0816 10:02:26.756402    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:27.257667    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:27.757259    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:28.256864    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:28.757602    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:29.256802    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:29.757282    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:29.882342    3758 kubeadm.go:1113] duration metric: took 3.252511045s to wait for elevateKubeSystemPrivileges
	I0816 10:02:29.882365    3758 kubeadm.go:394] duration metric: took 14.539506386s to StartCluster
	I0816 10:02:29.882380    3758 settings.go:142] acquiring lock: {Name:mk60e377244eac756871b74b6bd45115654652a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:29.882471    3758 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:02:29.882922    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/kubeconfig: {Name:mk7f4491c8ec5cf96c96a5a1a5dbfba4301394a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:29.883167    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 10:02:29.883170    3758 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:02:29.883182    3758 start.go:241] waiting for startup goroutines ...
	I0816 10:02:29.883204    3758 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 10:02:29.883238    3758 addons.go:69] Setting storage-provisioner=true in profile "ha-286000"
	I0816 10:02:29.883244    3758 addons.go:69] Setting default-storageclass=true in profile "ha-286000"
	I0816 10:02:29.883261    3758 addons.go:234] Setting addon storage-provisioner=true in "ha-286000"
	I0816 10:02:29.883278    3758 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-286000"
	I0816 10:02:29.883280    3758 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:02:29.883305    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:02:29.883558    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:29.883573    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:29.883575    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:29.883598    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:29.892969    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51028
	I0816 10:02:29.893238    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51030
	I0816 10:02:29.893356    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:29.893605    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:29.893730    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:29.893741    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:29.893938    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:29.893951    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:29.893976    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:29.894142    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:29.894254    3758 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:02:29.894338    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:29.894365    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:29.894397    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:29.894424    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:02:29.896690    3758 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:02:29.896968    3758 kapi.go:59] client config for ha-286000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key", CAFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xa202f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 10:02:29.897373    3758 cert_rotation.go:140] Starting client certificate rotation controller
	I0816 10:02:29.897530    3758 addons.go:234] Setting addon default-storageclass=true in "ha-286000"
	I0816 10:02:29.897550    3758 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:02:29.897771    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:29.897789    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:29.903669    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51032
	I0816 10:02:29.904077    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:29.904431    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:29.904451    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:29.904736    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:29.904889    3758 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:02:29.904993    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:29.905054    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:02:29.906086    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:29.906730    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51034
	I0816 10:02:29.907081    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:29.907463    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:29.907491    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:29.907694    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:29.908045    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:29.908061    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:29.917300    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51036
	I0816 10:02:29.917667    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:29.918020    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:29.918034    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:29.918277    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:29.918384    3758 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:02:29.918483    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:29.918572    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:02:29.919534    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:29.919671    3758 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 10:02:29.919680    3758 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 10:02:29.919688    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:29.919789    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:29.919896    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:29.919990    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:29.920072    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:29.928735    3758 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 10:02:29.965711    3758 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 10:02:29.965725    3758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 10:02:29.965744    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:29.965926    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:29.966043    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:29.966151    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:29.966280    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:30.033245    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 10:02:30.037035    3758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 10:02:30.090040    3758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 10:02:30.396267    3758 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
	I0816 10:02:30.396311    3758 main.go:141] libmachine: Making call to close driver server
	I0816 10:02:30.396323    3758 main.go:141] libmachine: (ha-286000) Calling .Close
	I0816 10:02:30.396482    3758 main.go:141] libmachine: Successfully made call to close driver server
	I0816 10:02:30.396488    3758 main.go:141] libmachine: (ha-286000) DBG | Closing plugin on server side
	I0816 10:02:30.396492    3758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 10:02:30.396501    3758 main.go:141] libmachine: Making call to close driver server
	I0816 10:02:30.396507    3758 main.go:141] libmachine: (ha-286000) Calling .Close
	I0816 10:02:30.396655    3758 main.go:141] libmachine: Successfully made call to close driver server
	I0816 10:02:30.396662    3758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 10:02:30.396713    3758 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0816 10:02:30.396725    3758 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0816 10:02:30.396804    3758 round_trippers.go:463] GET https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0816 10:02:30.396810    3758 round_trippers.go:469] Request Headers:
	I0816 10:02:30.396817    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:02:30.396822    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:02:30.402286    3758 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 10:02:30.402706    3758 round_trippers.go:463] PUT https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0816 10:02:30.402713    3758 round_trippers.go:469] Request Headers:
	I0816 10:02:30.402718    3758 round_trippers.go:473]     Content-Type: application/json
	I0816 10:02:30.402721    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:02:30.402723    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:02:30.404290    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:02:30.404403    3758 main.go:141] libmachine: Making call to close driver server
	I0816 10:02:30.404416    3758 main.go:141] libmachine: (ha-286000) Calling .Close
	I0816 10:02:30.404565    3758 main.go:141] libmachine: Successfully made call to close driver server
	I0816 10:02:30.404575    3758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 10:02:30.404586    3758 main.go:141] libmachine: (ha-286000) DBG | Closing plugin on server side
	I0816 10:02:30.497806    3758 main.go:141] libmachine: Making call to close driver server
	I0816 10:02:30.497818    3758 main.go:141] libmachine: (ha-286000) Calling .Close
	I0816 10:02:30.498006    3758 main.go:141] libmachine: Successfully made call to close driver server
	I0816 10:02:30.498011    3758 main.go:141] libmachine: (ha-286000) DBG | Closing plugin on server side
	I0816 10:02:30.498016    3758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 10:02:30.498023    3758 main.go:141] libmachine: Making call to close driver server
	I0816 10:02:30.498040    3758 main.go:141] libmachine: (ha-286000) Calling .Close
	I0816 10:02:30.498160    3758 main.go:141] libmachine: Successfully made call to close driver server
	I0816 10:02:30.498168    3758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 10:02:30.498179    3758 main.go:141] libmachine: (ha-286000) DBG | Closing plugin on server side
	I0816 10:02:30.538607    3758 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0816 10:02:30.596522    3758 addons.go:510] duration metric: took 713.336883ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0816 10:02:30.596578    3758 start.go:246] waiting for cluster config update ...
	I0816 10:02:30.596599    3758 start.go:255] writing updated cluster config ...
	I0816 10:02:30.634683    3758 out.go:201] 
	I0816 10:02:30.655754    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:02:30.655861    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:02:30.677634    3758 out.go:177] * Starting "ha-286000-m02" control-plane node in "ha-286000" cluster
	I0816 10:02:30.737443    3758 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:02:30.737475    3758 cache.go:56] Caching tarball of preloaded images
	I0816 10:02:30.737660    3758 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 10:02:30.737679    3758 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:02:30.737771    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:02:30.738414    3758 start.go:360] acquireMachinesLock for ha-286000-m02: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:02:30.738494    3758 start.go:364] duration metric: took 63.585µs to acquireMachinesLock for "ha-286000-m02"
	I0816 10:02:30.738517    3758 start.go:93] Provisioning new machine with config: &{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
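
The struct dump above is the cluster config minikube persists to profiles/ha-286000/config.json before provisioning m02. Below is a minimal, hypothetical Go sketch of that round trip; the Node and ClusterConfig types and their fields here are illustrative stand-ins, not minikube's actual types.

// Simplified, illustrative sketch of persisting a cluster config to JSON.
package main

import (
	"encoding/json"
	"fmt"
)

type Node struct {
	Name              string
	IP                string
	Port              int
	KubernetesVersion string
	ControlPlane      bool
	Worker            bool
}

type ClusterConfig struct {
	Name   string
	Driver string
	Memory int // MB
	CPUs   int
	Nodes  []Node
}

func main() {
	cfg := ClusterConfig{
		Name:   "ha-286000",
		Driver: "hyperkit",
		Memory: 2200,
		CPUs:   2,
		Nodes: []Node{
			{IP: "192.169.0.5", Port: 8443, KubernetesVersion: "v1.31.0", ControlPlane: true, Worker: true},
			{Name: "m02", Port: 8443, KubernetesVersion: "v1.31.0", ControlPlane: true, Worker: true}, // IP assigned later
		},
	}
	out, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // roughly what lands in profiles/ha-286000/config.json
}
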
	I0816 10:02:30.738590    3758 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I0816 10:02:30.760743    3758 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 10:02:30.760905    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:30.760935    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:30.771028    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51041
	I0816 10:02:30.771383    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:30.771745    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:30.771760    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:30.771967    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:30.772077    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:02:30.772157    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:30.772261    3758 start.go:159] libmachine.API.Create for "ha-286000" (driver="hyperkit")
	I0816 10:02:30.772276    3758 client.go:168] LocalClient.Create starting
	I0816 10:02:30.772303    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem
	I0816 10:02:30.772358    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:02:30.772368    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:02:30.772413    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem
	I0816 10:02:30.772450    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:02:30.772460    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:02:30.772472    3758 main.go:141] libmachine: Running pre-create checks...
	I0816 10:02:30.772477    3758 main.go:141] libmachine: (ha-286000-m02) Calling .PreCreateCheck
	I0816 10:02:30.772545    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:30.772568    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetConfigRaw
	I0816 10:02:30.781772    3758 main.go:141] libmachine: Creating machine...
	I0816 10:02:30.781789    3758 main.go:141] libmachine: (ha-286000-m02) Calling .Create
	I0816 10:02:30.781943    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:30.782181    3758 main.go:141] libmachine: (ha-286000-m02) DBG | I0816 10:02:30.781934    3805 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:02:30.782269    3758 main.go:141] libmachine: (ha-286000-m02) Downloading /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1276/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 10:02:30.982082    3758 main.go:141] libmachine: (ha-286000-m02) DBG | I0816 10:02:30.982020    3805 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa...
	I0816 10:02:31.266609    3758 main.go:141] libmachine: (ha-286000-m02) DBG | I0816 10:02:31.266514    3805 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/ha-286000-m02.rawdisk...
	I0816 10:02:31.266629    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Writing magic tar header
	I0816 10:02:31.266638    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Writing SSH key tar header
	I0816 10:02:31.267382    3758 main.go:141] libmachine: (ha-286000-m02) DBG | I0816 10:02:31.267242    3805 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02 ...
	I0816 10:02:31.642864    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:31.642889    3758 main.go:141] libmachine: (ha-286000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/hyperkit.pid
	I0816 10:02:31.642912    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Using UUID f7630301-2bc3-4935-a751-ac72999da031
	I0816 10:02:31.669349    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Generated MAC 72:69:8f:11:68:1d
	I0816 10:02:31.669369    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000
	I0816 10:02:31.669397    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f7630301-2bc3-4935-a751-ac72999da031", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:02:31.669426    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f7630301-2bc3-4935-a751-ac72999da031", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:02:31.669455    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "f7630301-2bc3-4935-a751-ac72999da031", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/ha-286000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"}
	I0816 10:02:31.669484    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U f7630301-2bc3-4935-a751-ac72999da031 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/ha-286000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"
	I0816 10:02:31.669499    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 10:02:31.672492    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: Pid is 3806
	I0816 10:02:31.672957    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 0
	I0816 10:02:31.673000    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:31.673054    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:31.674014    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:31.674086    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:02:31.674098    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:02:31.674120    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:02:31.674129    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:02:31.674137    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:02:31.680025    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 10:02:31.688109    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 10:02:31.688968    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:02:31.688993    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:02:31.689006    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:02:31.689016    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:02:32.077133    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 10:02:32.077153    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 10:02:32.191789    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:02:32.191806    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:02:32.191814    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:02:32.191825    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:02:32.192676    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 10:02:32.192686    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 10:02:33.675165    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 1
	I0816 10:02:33.675183    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:33.675297    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:33.676089    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:33.676141    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:02:33.676153    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:02:33.676162    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:02:33.676169    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:02:33.676193    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:02:35.676266    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 2
	I0816 10:02:35.676285    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:35.676360    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:35.677182    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:35.677237    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:02:35.677248    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:02:35.677262    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:02:35.677270    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:02:35.677281    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:02:37.678515    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 3
	I0816 10:02:37.678532    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:37.678572    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:37.679405    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:37.679441    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:02:37.679451    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:02:37.679461    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:02:37.679468    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:02:37.679483    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:02:37.782496    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:37 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 10:02:37.782531    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:37 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 10:02:37.782540    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:37 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 10:02:37.806630    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:37 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 10:02:39.680161    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 4
	I0816 10:02:39.680178    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:39.680273    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:39.681064    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:39.681112    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:02:39.681136    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:02:39.681155    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:02:39.681170    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:02:39.681181    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:02:41.681380    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 5
	I0816 10:02:41.681414    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:41.681563    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:41.682343    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:41.682392    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:02:41.682406    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:02:41.682415    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found match: 72:69:8f:11:68:1d
	I0816 10:02:41.682421    3758 main.go:141] libmachine: (ha-286000-m02) DBG | IP: 192.169.0.6
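
The retry loop above polls macOS's DHCP lease database until the generated MAC appears; attempt 5 finds 72:69:8f:11:68:1d bound to 192.169.0.6. A self-contained Go sketch of that MAC-to-IP lookup, assuming the stock /var/db/dhcpd_leases format with ip_address= and hw_address= fields inside {...} blocks:

// Sketch of the lease lookup the driver performs; not minikube's actual code.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func lookupIP(leasesPath, mac string) (string, error) {
	f, err := os.Open(leasesPath)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// e.g. hw_address=1,72:69:8f:11:68:1d -- match on the MAC suffix
			if strings.HasSuffix(line, mac) {
				return ip, nil
			}
		case line == "}":
			ip = "" // end of one lease entry
		}
	}
	if err := sc.Err(); err != nil {
		return "", err
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}

func main() {
	ip, err := lookupIP("/var/db/dhcpd_leases", "72:69:8f:11:68:1d")
	if err != nil {
		fmt.Println(err) // the driver would retry on the next attempt
		return
	}
	fmt.Println("IP:", ip) // e.g. 192.169.0.6 once the guest has a lease
}
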
	I0816 10:02:41.682475    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetConfigRaw
	I0816 10:02:41.683135    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:41.683257    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:41.683358    3758 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 10:02:41.683367    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetState
	I0816 10:02:41.683478    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:41.683537    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:41.684414    3758 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 10:02:41.684423    3758 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 10:02:41.684427    3758 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 10:02:41.684431    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:41.684566    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:41.684692    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:41.684809    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:41.684943    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:41.685095    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:41.685300    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:41.685308    3758 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 10:02:42.746559    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
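
The `exit 0` probe above is how the driver decides SSH is available: keep dialing with the generated key until a trivial command succeeds. A rough sketch using golang.org/x/crypto/ssh; the address, user, key path, and retry policy here are illustrative assumptions.

// Sketch of a "wait for SSH" probe against a freshly booted guest.
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func sshReady(addr, user, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs
		Timeout:         5 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.Run("exit 0") // nil error means the guest shell is up
}

func main() {
	for i := 0; i < 30; i++ {
		if err := sshReady("192.169.0.6:22", "docker", "/path/to/id_rsa"); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for SSH")
}
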
	I0816 10:02:42.746571    3758 main.go:141] libmachine: Detecting the provisioner...
	I0816 10:02:42.746577    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:42.746714    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:42.746824    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.746914    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.746988    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:42.747112    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:42.747269    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:42.747277    3758 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 10:02:42.811630    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 10:02:42.811666    3758 main.go:141] libmachine: found compatible host: buildroot
	I0816 10:02:42.811672    3758 main.go:141] libmachine: Provisioning with buildroot...
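
Provisioner detection boils down to reading /etc/os-release and matching on ID; the output above hits the buildroot branch. A small sketch of that parse, with the helper name and the single-ID match being simplifications of the real logic:

// Sketch of detecting the provisioner from `cat /etc/os-release` output.
package main

import (
	"fmt"
	"strings"
)

func osID(osRelease string) string {
	for _, line := range strings.Split(osRelease, "\n") {
		if v, ok := strings.CutPrefix(line, "ID="); ok {
			return strings.Trim(v, `"`)
		}
	}
	return ""
}

func main() {
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	if osID(out) == "buildroot" {
		fmt.Println("found compatible host: buildroot")
	}
}
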
	I0816 10:02:42.811681    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:02:42.811816    3758 buildroot.go:166] provisioning hostname "ha-286000-m02"
	I0816 10:02:42.811828    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:02:42.811943    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:42.812045    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:42.812136    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.812274    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.812376    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:42.812513    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:42.812673    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:42.812682    3758 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-286000-m02 && echo "ha-286000-m02" | sudo tee /etc/hostname
	I0816 10:02:42.887531    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-286000-m02
	
	I0816 10:02:42.887545    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:42.887676    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:42.887769    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.887867    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.887953    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:42.888075    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:42.888214    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:42.888225    3758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-286000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-286000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-286000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 10:02:42.959625    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
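
For completeness, a Go rendering of the /etc/hosts edit the shell above performs: rewrite the 127.0.1.1 line if one exists, append it otherwise. Purely illustrative; the real step runs the sed/tee pipeline over SSH.

// Sketch of the hostname-to-/etc/hosts mapping logic.
package main

import (
	"fmt"
	"strings"
)

func setHost(hosts, name string) string {
	lines := strings.Split(hosts, "\n")
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name // replace an existing mapping
			return strings.Join(lines, "\n")
		}
	}
	return hosts + "\n127.0.1.1 " + name // or append a new one
}

func main() {
	fmt.Println(setHost("127.0.0.1 localhost", "ha-286000-m02"))
}
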
	I0816 10:02:42.959641    3758 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19461-1276/.minikube CaCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19461-1276/.minikube}
	I0816 10:02:42.959649    3758 buildroot.go:174] setting up certificates
	I0816 10:02:42.959661    3758 provision.go:84] configureAuth start
	I0816 10:02:42.959666    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:02:42.959803    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:02:42.959899    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:42.959980    3758 provision.go:143] copyHostCerts
	I0816 10:02:42.960007    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:02:42.960055    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem, removing ...
	I0816 10:02:42.960061    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:02:42.960954    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem (1078 bytes)
	I0816 10:02:42.961160    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:02:42.961190    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem, removing ...
	I0816 10:02:42.961195    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:02:42.961273    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem (1123 bytes)
	I0816 10:02:42.961439    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:02:42.961509    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem, removing ...
	I0816 10:02:42.961515    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:02:42.961594    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem (1679 bytes)
	I0816 10:02:42.961746    3758 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem org=jenkins.ha-286000-m02 san=[127.0.0.1 192.169.0.6 ha-286000-m02 localhost minikube]
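
The server cert generated above is signed by the minikube CA and carries the logged SANs (127.0.0.1, 192.169.0.6, ha-286000-m02, localhost, minikube). A standard-library sketch of that issuance, with a freshly generated key standing in for the real ca.pem/ca-key.pem material:

// Sketch of issuing a TLS server cert with IP and DNS SANs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Stand-in CA; the real flow loads the CA cert and key from disk.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}

	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-286000-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.6")},
		DNSNames:    []string{"ha-286000-m02", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
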
	I0816 10:02:43.093511    3758 provision.go:177] copyRemoteCerts
	I0816 10:02:43.093556    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 10:02:43.093572    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:43.093719    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:43.093818    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.093917    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:43.094005    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:02:43.132229    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 10:02:43.132299    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 10:02:43.151814    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 10:02:43.151873    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 10:02:43.171464    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 10:02:43.171532    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 10:02:43.191045    3758 provision.go:87] duration metric: took 231.381757ms to configureAuth
	I0816 10:02:43.191057    3758 buildroot.go:189] setting minikube options for container-runtime
	I0816 10:02:43.191187    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:02:43.191200    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:43.191321    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:43.191410    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:43.191497    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.191580    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.191658    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:43.191759    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:43.191891    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:43.191898    3758 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 10:02:43.256733    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 10:02:43.256744    3758 buildroot.go:70] root file system type: tmpfs
	I0816 10:02:43.256823    3758 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 10:02:43.256835    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:43.256961    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:43.257048    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.257142    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.257220    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:43.257364    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:43.257505    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:43.257553    3758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 10:02:43.330709    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 10:02:43.330729    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:43.330874    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:43.330977    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.331073    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.331167    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:43.331293    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:43.331440    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:43.331451    3758 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 10:02:44.884634    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0816 10:02:44.884648    3758 main.go:141] libmachine: Checking connection to Docker...
	I0816 10:02:44.884655    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetURL
	I0816 10:02:44.884793    3758 main.go:141] libmachine: Docker is up and running!
	I0816 10:02:44.884799    3758 main.go:141] libmachine: Reticulating splines...
	I0816 10:02:44.884804    3758 client.go:171] duration metric: took 14.112778669s to LocalClient.Create
	I0816 10:02:44.884820    3758 start.go:167] duration metric: took 14.112810851s to libmachine.API.Create "ha-286000"
	I0816 10:02:44.884826    3758 start.go:293] postStartSetup for "ha-286000-m02" (driver="hyperkit")
	I0816 10:02:44.884832    3758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 10:02:44.884843    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:44.884991    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 10:02:44.885004    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:44.885101    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:44.885204    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:44.885335    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:44.885428    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:02:44.925701    3758 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 10:02:44.929599    3758 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 10:02:44.929613    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/addons for local assets ...
	I0816 10:02:44.929722    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/files for local assets ...
	I0816 10:02:44.929908    3758 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> 18312.pem in /etc/ssl/certs
	I0816 10:02:44.929914    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /etc/ssl/certs/18312.pem
	I0816 10:02:44.930119    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 10:02:44.940484    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:02:44.973200    3758 start.go:296] duration metric: took 88.36731ms for postStartSetup
	I0816 10:02:44.973230    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetConfigRaw
	I0816 10:02:44.973848    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:02:44.973996    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:02:44.974351    3758 start.go:128] duration metric: took 14.23599979s to createHost
	I0816 10:02:44.974366    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:44.974448    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:44.974568    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:44.974660    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:44.974744    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:44.974844    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:44.974966    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:44.974973    3758 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 10:02:45.036267    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723827764.932365297
	
	I0816 10:02:45.036285    3758 fix.go:216] guest clock: 1723827764.932365297
	I0816 10:02:45.036291    3758 fix.go:229] Guest: 2024-08-16 10:02:44.932365297 -0700 PDT Remote: 2024-08-16 10:02:44.97436 -0700 PDT m=+56.899083401 (delta=-41.994703ms)
	I0816 10:02:45.036301    3758 fix.go:200] guest clock delta is within tolerance: -41.994703ms
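
The guest-clock check parses the `date +%s.%N` output from the VM and accepts the result if the host/guest delta is within tolerance; here the delta is -41.994703ms. A sketch of that computation, reusing the two timestamps from the log above:

// Sketch of the guest clock delta calculation.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func clockDelta(guestOut string, hostNow time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(hostNow), nil
}

func main() {
	// Host wall clock at the moment of the check, taken from the log.
	host := time.Unix(0, int64(1723827764.974360*float64(time.Second)))
	delta, err := clockDelta("1723827764.932365297", host)
	if err != nil {
		panic(err)
	}
	fmt.Printf("guest clock delta: %v\n", delta) // ~ -41.99ms, as in the log
}
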
	I0816 10:02:45.036306    3758 start.go:83] releasing machines lock for "ha-286000-m02", held for 14.298063452s
	I0816 10:02:45.036338    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:45.036468    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:02:45.062334    3758 out.go:177] * Found network options:
	I0816 10:02:45.105996    3758 out.go:177]   - NO_PROXY=192.169.0.5
	W0816 10:02:45.128174    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:02:45.128216    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:45.128829    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:45.128969    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:45.129060    3758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 10:02:45.129088    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	W0816 10:02:45.129115    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:02:45.129222    3758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0816 10:02:45.129232    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:45.129238    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:45.129370    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:45.129393    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:45.129500    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:45.129515    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:45.129631    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:02:45.129647    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:45.129727    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	W0816 10:02:45.164265    3758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 10:02:45.164324    3758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 10:02:45.211468    3758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 10:02:45.211489    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:02:45.211595    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:02:45.228144    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 10:02:45.237310    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 10:02:45.246272    3758 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 10:02:45.246316    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 10:02:45.255987    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:02:45.265400    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 10:02:45.274661    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:02:45.283628    3758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 10:02:45.292648    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 10:02:45.302207    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 10:02:45.311314    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 10:02:45.320414    3758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 10:02:45.328532    3758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 10:02:45.337229    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:45.444954    3758 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0816 10:02:45.464569    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:02:45.464652    3758 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 10:02:45.488292    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:02:45.500162    3758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 10:02:45.561835    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:02:45.572027    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:02:45.582122    3758 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 10:02:45.603011    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:02:45.613488    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:02:45.628434    3758 ssh_runner.go:195] Run: which cri-dockerd
	I0816 10:02:45.631358    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 10:02:45.638496    3758 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 10:02:45.652676    3758 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 10:02:45.755971    3758 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 10:02:45.860533    3758 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 10:02:45.860559    3758 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 10:02:45.874570    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:45.976095    3758 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:02:48.459358    3758 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.483288325s)
	I0816 10:02:48.459417    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0816 10:02:48.469882    3758 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0816 10:02:48.484429    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:02:48.495038    3758 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0816 10:02:48.597129    3758 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0816 10:02:48.691412    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:48.797592    3758 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0816 10:02:48.812075    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:02:48.823307    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:48.918652    3758 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0816 10:02:48.980191    3758 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0816 10:02:48.980257    3758 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
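
"Will wait 60s for socket path" is a simple stat-poll against /var/run/cri-dockerd.sock, and the same deadline pattern guards the crictl check that follows. A minimal sketch; the 500ms poll interval is an assumption, not the actual value:

// Sketch of polling for a unix socket to appear before a deadline.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket exists, the runtime is reachable
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("cri-dockerd socket is ready")
}
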
	I0816 10:02:48.984954    3758 start.go:563] Will wait 60s for crictl version
	I0816 10:02:48.985011    3758 ssh_runner.go:195] Run: which crictl
	I0816 10:02:48.988185    3758 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 10:02:49.015262    3758 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0816 10:02:49.015344    3758 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:02:49.032341    3758 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:02:49.078128    3758 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0816 10:02:49.137645    3758 out.go:177]   - env NO_PROXY=192.169.0.5
	I0816 10:02:49.176661    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:02:49.177129    3758 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0816 10:02:49.181778    3758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:02:49.191522    3758 mustload.go:65] Loading cluster: ha-286000
	I0816 10:02:49.191665    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:02:49.191885    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:49.191909    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:49.200721    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51064
	I0816 10:02:49.201069    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:49.201413    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:49.201424    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:49.201648    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:49.201776    3758 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:02:49.201862    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:49.201960    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:02:49.202920    3758 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:02:49.203188    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:49.203205    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:49.211926    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51066
	I0816 10:02:49.212258    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:49.212635    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:49.212652    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:49.212860    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:49.212967    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:49.213056    3758 certs.go:68] Setting up /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000 for IP: 192.169.0.6
	I0816 10:02:49.213061    3758 certs.go:194] generating shared ca certs ...
	I0816 10:02:49.213071    3758 certs.go:226] acquiring lock for ca certs: {Name:mka549fbc467849834d2267da204028c49d88a3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:49.213229    3758 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key
	I0816 10:02:49.213305    3758 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key
	I0816 10:02:49.213314    3758 certs.go:256] generating profile certs ...
	I0816 10:02:49.213406    3758 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key
	I0816 10:02:49.213429    3758 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.5eb44438
	I0816 10:02:49.213443    3758 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.5eb44438 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I0816 10:02:49.263058    3758 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.5eb44438 ...
	I0816 10:02:49.263077    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.5eb44438: {Name:mk266d5e842df442c104f85113e06d171d5a89ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:49.263395    3758 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.5eb44438 ...
	I0816 10:02:49.263404    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.5eb44438: {Name:mk1428912a4c1edaafe55f9bff302c19ad337785 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:49.263623    3758 certs.go:381] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.5eb44438 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt
	I0816 10:02:49.263846    3758 certs.go:385] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.5eb44438 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key
	I0816 10:02:49.264093    3758 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key
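
Note: the apiserver cert is regenerated here because it must name every address a client may dial: the in-cluster service IP 10.96.0.1, localhost, both node IPs, and the kube-vip VIP 192.169.0.254. Adding a control-plane node therefore forces a fresh `apiserver.crt.<hash>` even though the CA is reused. A sketch for inspecting the SAN list on the node, using the path from the log:

    openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
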
	I0816 10:02:49.264103    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 10:02:49.264125    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 10:02:49.264143    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 10:02:49.264163    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 10:02:49.264180    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 10:02:49.264199    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 10:02:49.264223    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 10:02:49.264242    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 10:02:49.264331    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem (1338 bytes)
	W0816 10:02:49.264378    3758 certs.go:480] ignoring /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831_empty.pem, impossibly tiny 0 bytes
	I0816 10:02:49.264386    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 10:02:49.264418    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem (1078 bytes)
	I0816 10:02:49.264449    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem (1123 bytes)
	I0816 10:02:49.264477    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem (1679 bytes)
	I0816 10:02:49.264545    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:02:49.264593    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /usr/share/ca-certificates/18312.pem
	I0816 10:02:49.264614    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:49.264634    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem -> /usr/share/ca-certificates/1831.pem
	I0816 10:02:49.264663    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:49.264814    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:49.264897    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:49.265014    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:49.265108    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:49.296192    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0816 10:02:49.299577    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0816 10:02:49.312765    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0816 10:02:49.316551    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0816 10:02:49.332167    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0816 10:02:49.336199    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0816 10:02:49.349166    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0816 10:02:49.354242    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0816 10:02:49.364119    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0816 10:02:49.367349    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0816 10:02:49.375423    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0816 10:02:49.378717    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0816 10:02:49.386918    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 10:02:49.408036    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 10:02:49.428163    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 10:02:49.448952    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 10:02:49.468986    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0816 10:02:49.489674    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 10:02:49.509597    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 10:02:49.530175    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 10:02:49.550578    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /usr/share/ca-certificates/18312.pem (1708 bytes)
	I0816 10:02:49.571266    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 10:02:49.590995    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem --> /usr/share/ca-certificates/1831.pem (1338 bytes)
	I0816 10:02:49.611766    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0816 10:02:49.625222    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0816 10:02:49.638852    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0816 10:02:49.653497    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0816 10:02:49.667098    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0816 10:02:49.680935    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0816 10:02:49.695437    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0816 10:02:49.708684    3758 ssh_runner.go:195] Run: openssl version
	I0816 10:02:49.712979    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18312.pem && ln -fs /usr/share/ca-certificates/18312.pem /etc/ssl/certs/18312.pem"
	I0816 10:02:49.721565    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18312.pem
	I0816 10:02:49.725513    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 16:57 /usr/share/ca-certificates/18312.pem
	I0816 10:02:49.725580    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18312.pem
	I0816 10:02:49.730364    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18312.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 10:02:49.739184    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 10:02:49.747494    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:49.750923    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:48 /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:49.750955    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:49.755195    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 10:02:49.763703    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1831.pem && ln -fs /usr/share/ca-certificates/1831.pem /etc/ssl/certs/1831.pem"
	I0816 10:02:49.772914    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1831.pem
	I0816 10:02:49.776377    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 16:57 /usr/share/ca-certificates/1831.pem
	I0816 10:02:49.776421    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1831.pem
	I0816 10:02:49.780731    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1831.pem /etc/ssl/certs/51391683.0"
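
Note: the `test -L ... || ln -fs ...` commands above build OpenSSL's hashed certificate directory: each trusted cert is linked as `<subject-hash>.0`, where the hash is exactly what the preceding `openssl x509 -hash -noout` calls printed (b5213941 for the minikube CA, 3ec20f2e and 51391683 for the test certs). One link, reconstructed:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # -> b5213941.0
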
	I0816 10:02:49.789078    3758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 10:02:49.792242    3758 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 10:02:49.792277    3758 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.0 docker true true} ...
	I0816 10:02:49.792332    3758 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-286000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
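
Note: the doubled `ExecStart=` in the unit text above is the standard systemd override idiom: the empty assignment clears the packaged command, then the second installs minikube's kubelet invocation with the node-specific `--hostname-override` and `--node-ip`. It lands as a drop-in (the scp of `10-kubeadm.conf` appears further down); a hedged manual sketch of the same drop-in:

    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    printf '%s\n' '[Service]' 'ExecStart=' \
      'ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-286000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6' \
      | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null
    sudo systemctl daemon-reload && sudo systemctl restart kubelet
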
	I0816 10:02:49.792349    3758 kube-vip.go:115] generating kube-vip config ...
	I0816 10:02:49.792381    3758 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 10:02:49.804469    3758 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 10:02:49.804511    3758 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
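
Note: the manifest above runs kube-vip as a static pod on each control-plane node: `vip_leaderelection` elects a holder of the `plndr-cp-lock` lease, the winner advertises 192.169.0.254 on eth0 via ARP (`vip_arp`, `vip_interface`), and `lb_enable`/`lb_port` additionally balance apiserver traffic on 8443 (hence the auto-enabling line just before). Being a static pod, the kubelet starts it straight from `/etc/kubernetes/manifests` (the scp below), with no scheduler involved. Quick checks once the node is up (sketch):

    ls /etc/kubernetes/manifests/kube-vip.yaml      # static pod manifest on disk
    curl -k https://192.169.0.254:8443/version      # the VIP should front an apiserver
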
	I0816 10:02:49.804567    3758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 10:02:49.812921    3758 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0816 10:02:49.812983    3758 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0816 10:02:49.821031    3758 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0816 10:02:49.821035    3758 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0816 10:02:49.821039    3758 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubectl
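
Note: the `?checksum=file:` suffix on the download URLs tells minikube's downloader (go-getter-style checksumming; an assumption about the mechanism, not shown in the log) to fetch the published `.sha256` alongside each binary and verify it before caching; the three fetches run concurrently. A manual equivalent for one binary:

    V=v1.31.0; B=kubelet
    curl -LO "https://dl.k8s.io/release/${V}/bin/linux/amd64/${B}"
    echo "$(curl -Ls https://dl.k8s.io/release/${V}/bin/linux/amd64/${B}.sha256)  ${B}" \
      | sha256sum -c -
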
	I0816 10:02:51.911279    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0816 10:02:51.911374    3758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0816 10:02:51.915115    3758 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0816 10:02:51.915150    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0816 10:02:52.537289    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0816 10:02:52.537383    3758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0816 10:02:52.540898    3758 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0816 10:02:52.540929    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0816 10:02:53.267467    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:02:53.278589    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0816 10:02:53.278731    3758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0816 10:02:53.282055    3758 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0816 10:02:53.282073    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0816 10:02:53.498714    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0816 10:02:53.506112    3758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0816 10:02:53.519672    3758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 10:02:53.533551    3758 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0816 10:02:53.548024    3758 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0816 10:02:53.551124    3758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:02:53.560470    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:53.659270    3758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:02:53.674625    3758 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:02:53.674900    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:53.674923    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:53.683891    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51093
	I0816 10:02:53.684256    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:53.684641    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:53.684658    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:53.684885    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:53.685003    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:53.685084    3758 start.go:317] joinCluster: &{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:02:53.685155    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0816 10:02:53.685167    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:53.685248    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:53.685339    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:53.685423    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:53.685503    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:53.765911    3758 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:02:53.765972    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mpmdnf.6rzc0wpb01e0t6lz --discovery-token-ca-cert-hash sha256:995590413ea712c4f7db2cf849d28264ebc291e6cd53d741ce1d22c1daaa1602 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-286000-m02 --control-plane --apiserver-advertise-address=192.169.0.6 --apiserver-bind-port=8443"
	I0816 10:03:22.070391    3758 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mpmdnf.6rzc0wpb01e0t6lz --discovery-token-ca-cert-hash sha256:995590413ea712c4f7db2cf849d28264ebc291e6cd53d741ce1d22c1daaa1602 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-286000-m02 --control-plane --apiserver-advertise-address=192.169.0.6 --apiserver-bind-port=8443": (28.304928658s)
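
Note: the 28s join above is the stock kubeadm control-plane flow: a token plus discovery hash was minted on the existing member with `kubeadm token create --print-join-command --ttl=0` (a few lines up), then extended with `--control-plane` and an advertise address so the joiner becomes a second apiserver/etcd member. No `--certificate-key` upload is needed because minikube already copied the shared CAs and the sa/front-proxy/etcd keys onto the node over scp (the transfer block earlier). The pattern as a sketch:

    # On an existing control-plane node:
    JOIN=$(sudo kubeadm token create --print-join-command --ttl=0)
    # On the new node (shared certs already staged by minikube):
    sudo ${JOIN} --control-plane --apiserver-advertise-address=192.169.0.6 \
      --apiserver-bind-port=8443 --node-name=ha-286000-m02 \
      --cri-socket unix:///var/run/cri-dockerd.sock
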
	I0816 10:03:22.070414    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0816 10:03:22.427732    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-286000-m02 minikube.k8s.io/updated_at=2024_08_16T10_03_22_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd minikube.k8s.io/name=ha-286000 minikube.k8s.io/primary=false
	I0816 10:03:22.531211    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-286000-m02 node-role.kubernetes.io/control-plane:NoSchedule-
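
Note: the two kubectl calls above finish registration: the label stamps minikube metadata on the node, and the trailing `-` on the taint removes `node-role.kubernetes.io/control-plane:NoSchedule`, so this control-plane node also schedules ordinary workloads (it is Worker:true in the profile). The remove-a-taint syntax in isolation:

    kubectl taint nodes ha-286000-m02 node-role.kubernetes.io/control-plane:NoSchedule-
    kubectl get node ha-286000-m02 -o jsonpath='{.spec.taints}'   # NoSchedule now absent
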
	I0816 10:03:22.629050    3758 start.go:319] duration metric: took 28.944509117s to joinCluster
	I0816 10:03:22.629105    3758 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:03:22.629360    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:03:22.652598    3758 out.go:177] * Verifying Kubernetes components...
	I0816 10:03:22.726472    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:03:22.952731    3758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:03:22.975401    3758 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:03:22.975621    3758 kapi.go:59] client config for ha-286000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key", CAFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xa202f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0816 10:03:22.975663    3758 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0816 10:03:22.975836    3758 node_ready.go:35] waiting up to 6m0s for node "ha-286000-m02" to be "Ready" ...
	I0816 10:03:22.975886    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:22.975891    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:22.975897    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:22.975901    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:22.983180    3758 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0816 10:03:23.476314    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:23.476365    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:23.476380    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:23.476394    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:23.502874    3758 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0816 10:03:23.977290    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:23.977304    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:23.977311    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:23.977315    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:23.979495    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:24.477835    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:24.477851    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:24.477858    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:24.477861    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:24.480701    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:24.977514    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:24.977568    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:24.977580    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:24.977588    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:24.980856    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:24.981299    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:25.476235    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:25.476252    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:25.476260    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:25.476277    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:25.478567    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:25.976418    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:25.976469    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:25.976477    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:25.976480    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:25.978730    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:26.476423    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:26.476439    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:26.476445    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:26.476448    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:26.478424    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:26.975919    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:26.975934    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:26.975940    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:26.975945    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:26.977809    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:27.475966    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:27.475981    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:27.475987    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:27.475990    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:27.477769    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:27.478121    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:27.977123    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:27.977139    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:27.977146    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:27.977150    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:27.979266    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:28.477140    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:28.477226    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:28.477254    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:28.477264    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:28.480386    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:28.977368    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:28.977407    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:28.977420    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:28.977427    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:28.980479    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:29.477521    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:29.477545    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:29.477556    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:29.477565    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:29.481179    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:29.481510    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:29.975906    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:29.975932    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:29.975943    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:29.975949    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:29.978633    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:30.477171    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:30.477189    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:30.477198    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:30.477201    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:30.480005    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:30.977947    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:30.977973    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:30.977993    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:30.977998    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:30.982219    3758 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 10:03:31.476252    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:31.476327    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:31.476341    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:31.476347    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:31.479417    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:31.975987    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:31.976012    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:31.976023    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:31.976029    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:31.979409    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:31.979880    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:32.477180    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:32.477207    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:32.477229    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:32.477240    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:32.480686    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:32.976225    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:32.976250    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:32.976261    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:32.976269    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:32.979533    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:33.475956    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:33.475980    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:33.475991    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:33.475997    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:33.479025    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:33.977111    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:33.977185    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:33.977198    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:33.977204    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:33.980426    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:33.980922    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:34.476455    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:34.476470    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:34.476477    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:34.476481    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:34.478517    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:34.976996    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:34.977037    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:34.977049    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:34.977084    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:34.980500    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:35.476045    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:35.476068    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:35.476079    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:35.476086    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:35.479058    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:35.975920    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:35.975944    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:35.975956    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:35.975962    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:35.979071    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:36.477845    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:36.477872    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:36.477885    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:36.477946    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:36.481401    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:36.481890    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:36.977033    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:36.977057    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:36.977069    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:36.977076    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:36.980476    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:37.477095    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:37.477120    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:37.477133    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:37.477141    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:37.480485    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:37.976303    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:37.976320    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:37.976343    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:37.976348    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:37.978551    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:38.476492    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:38.476517    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.476528    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.476536    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.480190    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:38.976091    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:38.976114    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.976124    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.976129    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.979741    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:38.980051    3758 node_ready.go:49] node "ha-286000-m02" has status "Ready":"True"
	I0816 10:03:38.980066    3758 node_ready.go:38] duration metric: took 16.004516347s for node "ha-286000-m02" to be "Ready" ...
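
Note: the block of GETs above is a plain poll, roughly every 500ms, of `/api/v1/nodes/ha-286000-m02` until the Ready condition flips to True (16s here); after the stale-host override it talks to 192.169.0.5 directly rather than the VIP. The same wait expressed with kubectl (sketch):

    kubectl wait --for=condition=Ready node/ha-286000-m02 --timeout=6m
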
	I0816 10:03:38.980077    3758 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 10:03:38.980127    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:03:38.980135    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.980142    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.980152    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.982937    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:38.987546    3758 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-2kqjf" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.987591    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-2kqjf
	I0816 10:03:38.987597    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.987603    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.987607    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.989339    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.989748    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:38.989758    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.989764    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.989768    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.991324    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.991660    3758 pod_ready.go:93] pod "coredns-6f6b679f8f-2kqjf" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:38.991671    3758 pod_ready.go:82] duration metric: took 4.111381ms for pod "coredns-6f6b679f8f-2kqjf" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.991678    3758 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-rfbz7" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.991708    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-rfbz7
	I0816 10:03:38.991713    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.991718    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.991721    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.993245    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.993624    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:38.993631    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.993640    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.993644    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.995095    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.995419    3758 pod_ready.go:93] pod "coredns-6f6b679f8f-rfbz7" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:38.995427    3758 pod_ready.go:82] duration metric: took 3.744646ms for pod "coredns-6f6b679f8f-rfbz7" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.995433    3758 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.995467    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-286000
	I0816 10:03:38.995472    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.995478    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.995482    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.997007    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.997390    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:38.997397    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.997403    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.997406    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.998839    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.999151    3758 pod_ready.go:93] pod "etcd-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:38.999160    3758 pod_ready.go:82] duration metric: took 3.721879ms for pod "etcd-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.999165    3758 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.999202    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-286000-m02
	I0816 10:03:38.999207    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.999213    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.999217    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.001189    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:39.001547    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:39.001554    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.001559    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.001563    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.003004    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:39.003290    3758 pod_ready.go:93] pod "etcd-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:39.003298    3758 pod_ready.go:82] duration metric: took 4.127582ms for pod "etcd-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.003307    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.177834    3758 request.go:632] Waited for 174.443582ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000
	I0816 10:03:39.177948    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000
	I0816 10:03:39.177963    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.177974    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.177981    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.181371    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:39.376438    3758 request.go:632] Waited for 194.538822ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:39.376562    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:39.376574    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.376586    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.376596    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.379989    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:39.380425    3758 pod_ready.go:93] pod "kube-apiserver-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:39.380437    3758 pod_ready.go:82] duration metric: took 377.131323ms for pod "kube-apiserver-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.380446    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.576633    3758 request.go:632] Waited for 196.095353ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000-m02
	I0816 10:03:39.576687    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000-m02
	I0816 10:03:39.576696    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.576769    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.576793    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.580164    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:39.776642    3758 request.go:632] Waited for 195.66937ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:39.776725    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:39.776735    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.776746    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.776752    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.780273    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:39.780714    3758 pod_ready.go:93] pod "kube-apiserver-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:39.780723    3758 pod_ready.go:82] duration metric: took 400.274102ms for pod "kube-apiserver-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
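
Note: each pod check above pairs a GET of `pods/<name>` with a GET of its node, and the interleaved "Waited ... due to client-side throttling" lines come from client-go's own rate limiter (low default QPS) spacing the burst; as the message says, it is not server-side priority and fairness. An equivalent wait for one component, using the labels listed at the start of this phase (sketch):

    kubectl -n kube-system wait --for=condition=Ready pod \
      -l component=kube-apiserver --timeout=6m
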
	I0816 10:03:39.780730    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.977598    3758 request.go:632] Waited for 196.714211ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000
	I0816 10:03:39.977650    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000
	I0816 10:03:39.977659    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.977670    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.977679    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.981079    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.177460    3758 request.go:632] Waited for 195.94045ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:40.177528    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:40.177589    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:40.177603    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:40.177609    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:40.180971    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.181992    3758 pod_ready.go:93] pod "kube-controller-manager-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:40.182036    3758 pod_ready.go:82] duration metric: took 401.277819ms for pod "kube-controller-manager-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:40.182047    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:40.376258    3758 request.go:632] Waited for 194.111468ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000-m02
	I0816 10:03:40.376327    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000-m02
	I0816 10:03:40.376336    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:40.376347    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:40.376355    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:40.379476    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.576506    3758 request.go:632] Waited for 196.409077ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:40.576552    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:40.576560    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:40.576571    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:40.576580    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:40.579964    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.580641    3758 pod_ready.go:93] pod "kube-controller-manager-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:40.580654    3758 pod_ready.go:82] duration metric: took 398.607786ms for pod "kube-controller-manager-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:40.580663    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pt669" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:40.778194    3758 request.go:632] Waited for 197.469682ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pt669
	I0816 10:03:40.778324    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pt669
	I0816 10:03:40.778333    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:40.778345    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:40.778352    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:40.781925    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.976623    3758 request.go:632] Waited for 194.04689ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:40.976752    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:40.976761    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:40.976772    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:40.976780    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:40.980022    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.980473    3758 pod_ready.go:93] pod "kube-proxy-pt669" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:40.980485    3758 pod_ready.go:82] duration metric: took 399.822578ms for pod "kube-proxy-pt669" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:40.980494    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-w4nt2" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:41.177361    3758 request.go:632] Waited for 196.811255ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w4nt2
	I0816 10:03:41.177533    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w4nt2
	I0816 10:03:41.177545    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:41.177556    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:41.177563    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:41.181543    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:41.377692    3758 request.go:632] Waited for 195.583238ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:41.377791    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:41.377802    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:41.377812    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:41.377819    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:41.381134    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:41.381716    3758 pod_ready.go:93] pod "kube-proxy-w4nt2" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:41.381726    3758 pod_ready.go:82] duration metric: took 401.234529ms for pod "kube-proxy-w4nt2" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:41.381733    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:41.577020    3758 request.go:632] Waited for 195.246357ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000
	I0816 10:03:41.577073    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000
	I0816 10:03:41.577125    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:41.577138    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:41.577147    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:41.580051    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:41.777879    3758 request.go:632] Waited for 197.366145ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:41.777974    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:41.777983    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:41.777995    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:41.778003    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:41.781434    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:41.782012    3758 pod_ready.go:93] pod "kube-scheduler-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:41.782023    3758 pod_ready.go:82] duration metric: took 400.292624ms for pod "kube-scheduler-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:41.782051    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:41.976356    3758 request.go:632] Waited for 194.23341ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000-m02
	I0816 10:03:41.976463    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000-m02
	I0816 10:03:41.976473    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:41.976485    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:41.976494    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:41.980088    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:42.176952    3758 request.go:632] Waited for 196.161737ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:42.177009    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:42.177018    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.177026    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.177083    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.182888    3758 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 10:03:42.183179    3758 pod_ready.go:93] pod "kube-scheduler-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:42.183188    3758 pod_ready.go:82] duration metric: took 401.133714ms for pod "kube-scheduler-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:42.183203    3758 pod_ready.go:39] duration metric: took 3.203173842s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 10:03:42.183221    3758 api_server.go:52] waiting for apiserver process to appear ...
	I0816 10:03:42.183274    3758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:03:42.195671    3758 api_server.go:72] duration metric: took 19.566910589s to wait for apiserver process to appear ...
	I0816 10:03:42.195682    3758 api_server.go:88] waiting for apiserver healthz status ...
	I0816 10:03:42.195698    3758 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0816 10:03:42.198837    3758 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0816 10:03:42.198870    3758 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0816 10:03:42.198874    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.198880    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.198884    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.199389    3758 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0816 10:03:42.199468    3758 api_server.go:141] control plane version: v1.31.0
	I0816 10:03:42.199478    3758 api_server.go:131] duration metric: took 3.791893ms to wait for apiserver health ...
	I0816 10:03:42.199486    3758 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 10:03:42.378048    3758 request.go:632] Waited for 178.500938ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:03:42.378156    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:03:42.378166    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.378178    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.378186    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.382776    3758 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 10:03:42.386068    3758 system_pods.go:59] 17 kube-system pods found
	I0816 10:03:42.386083    3758 system_pods.go:61] "coredns-6f6b679f8f-2kqjf" [be18cd09-3e3b-4749-bf29-7001a879f593] Running
	I0816 10:03:42.386087    3758 system_pods.go:61] "coredns-6f6b679f8f-rfbz7" [a593e27a-e38b-46c7-a603-44963c31c095] Running
	I0816 10:03:42.386090    3758 system_pods.go:61] "etcd-ha-286000" [c45bc623-befe-4e88-8da1-bb4f7d7be5b2] Running
	I0816 10:03:42.386093    3758 system_pods.go:61] "etcd-ha-286000-m02" [e14fbe0c-4527-4cbb-8c13-01c0b32a8d15] Running
	I0816 10:03:42.386096    3758 system_pods.go:61] "kindnet-t9kjf" [20c57f28-5e86-44a4-8a2a-07e1e240abf2] Running
	I0816 10:03:42.386099    3758 system_pods.go:61] "kindnet-whqxb" [1cc5291b-52ea-44b5-b87c-607d46b5281a] Running
	I0816 10:03:42.386102    3758 system_pods.go:61] "kube-apiserver-ha-286000" [c0d298a9-931b-4084-9e5f-01a88de27456] Running
	I0816 10:03:42.386104    3758 system_pods.go:61] "kube-apiserver-ha-286000-m02" [3666c131-2b88-483a-ab8c-b18cb8db7d6f] Running
	I0816 10:03:42.386120    3758 system_pods.go:61] "kube-controller-manager-ha-286000" [5a4f4c5d-d89a-4e5f-b022-479ca31f64cd] Running
	I0816 10:03:42.386127    3758 system_pods.go:61] "kube-controller-manager-ha-286000-m02" [a347c3d4-fa47-49ea-93a0-83087017a2bd] Running
	I0816 10:03:42.386130    3758 system_pods.go:61] "kube-proxy-pt669" [798c956f-8f08-465a-b0d9-90b65f703f80] Running
	I0816 10:03:42.386139    3758 system_pods.go:61] "kube-proxy-w4nt2" [79ce2248-f8fd-4a3b-aeb7-f81d7ad64564] Running
	I0816 10:03:42.386143    3758 system_pods.go:61] "kube-scheduler-ha-286000" [b4af80e0-cbb0-4257-a212-3585faff0875] Running
	I0816 10:03:42.386145    3758 system_pods.go:61] "kube-scheduler-ha-286000-m02" [89920e93-c959-47ec-8a2c-f2b63e8c3d3a] Running
	I0816 10:03:42.386153    3758 system_pods.go:61] "kube-vip-ha-286000" [b0f85be2-2afb-44b0-a701-161b24529349] Running
	I0816 10:03:42.386156    3758 system_pods.go:61] "kube-vip-ha-286000-m02" [4982dc53-946d-4f75-8e9e-452ef39ff2fa] Running
	I0816 10:03:42.386159    3758 system_pods.go:61] "storage-provisioner" [4805d53b-2db3-4092-a3f2-d4a854e93adc] Running
	I0816 10:03:42.386163    3758 system_pods.go:74] duration metric: took 186.675963ms to wait for pod list to return data ...
	I0816 10:03:42.386170    3758 default_sa.go:34] waiting for default service account to be created ...
	I0816 10:03:42.577030    3758 request.go:632] Waited for 190.795237ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0816 10:03:42.577183    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0816 10:03:42.577198    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.577209    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.577215    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.581020    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:42.581204    3758 default_sa.go:45] found service account: "default"
	I0816 10:03:42.581216    3758 default_sa.go:55] duration metric: took 195.045213ms for default service account to be created ...
	I0816 10:03:42.581224    3758 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 10:03:42.777160    3758 request.go:632] Waited for 195.848894ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:03:42.777252    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:03:42.777268    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.777285    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.777303    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.781637    3758 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 10:03:42.784962    3758 system_pods.go:86] 17 kube-system pods found
	I0816 10:03:42.784974    3758 system_pods.go:89] "coredns-6f6b679f8f-2kqjf" [be18cd09-3e3b-4749-bf29-7001a879f593] Running
	I0816 10:03:42.784978    3758 system_pods.go:89] "coredns-6f6b679f8f-rfbz7" [a593e27a-e38b-46c7-a603-44963c31c095] Running
	I0816 10:03:42.784981    3758 system_pods.go:89] "etcd-ha-286000" [c45bc623-befe-4e88-8da1-bb4f7d7be5b2] Running
	I0816 10:03:42.784984    3758 system_pods.go:89] "etcd-ha-286000-m02" [e14fbe0c-4527-4cbb-8c13-01c0b32a8d15] Running
	I0816 10:03:42.784987    3758 system_pods.go:89] "kindnet-t9kjf" [20c57f28-5e86-44a4-8a2a-07e1e240abf2] Running
	I0816 10:03:42.784990    3758 system_pods.go:89] "kindnet-whqxb" [1cc5291b-52ea-44b5-b87c-607d46b5281a] Running
	I0816 10:03:42.784992    3758 system_pods.go:89] "kube-apiserver-ha-286000" [c0d298a9-931b-4084-9e5f-01a88de27456] Running
	I0816 10:03:42.784995    3758 system_pods.go:89] "kube-apiserver-ha-286000-m02" [3666c131-2b88-483a-ab8c-b18cb8db7d6f] Running
	I0816 10:03:42.784997    3758 system_pods.go:89] "kube-controller-manager-ha-286000" [5a4f4c5d-d89a-4e5f-b022-479ca31f64cd] Running
	I0816 10:03:42.785001    3758 system_pods.go:89] "kube-controller-manager-ha-286000-m02" [a347c3d4-fa47-49ea-93a0-83087017a2bd] Running
	I0816 10:03:42.785003    3758 system_pods.go:89] "kube-proxy-pt669" [798c956f-8f08-465a-b0d9-90b65f703f80] Running
	I0816 10:03:42.785006    3758 system_pods.go:89] "kube-proxy-w4nt2" [79ce2248-f8fd-4a3b-aeb7-f81d7ad64564] Running
	I0816 10:03:42.785009    3758 system_pods.go:89] "kube-scheduler-ha-286000" [b4af80e0-cbb0-4257-a212-3585faff0875] Running
	I0816 10:03:42.785011    3758 system_pods.go:89] "kube-scheduler-ha-286000-m02" [89920e93-c959-47ec-8a2c-f2b63e8c3d3a] Running
	I0816 10:03:42.785015    3758 system_pods.go:89] "kube-vip-ha-286000" [b0f85be2-2afb-44b0-a701-161b24529349] Running
	I0816 10:03:42.785017    3758 system_pods.go:89] "kube-vip-ha-286000-m02" [4982dc53-946d-4f75-8e9e-452ef39ff2fa] Running
	I0816 10:03:42.785023    3758 system_pods.go:89] "storage-provisioner" [4805d53b-2db3-4092-a3f2-d4a854e93adc] Running
	I0816 10:03:42.785028    3758 system_pods.go:126] duration metric: took 203.80313ms to wait for k8s-apps to be running ...
	I0816 10:03:42.785034    3758 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 10:03:42.785088    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:03:42.796381    3758 system_svc.go:56] duration metric: took 11.343439ms WaitForService to wait for kubelet
	I0816 10:03:42.796397    3758 kubeadm.go:582] duration metric: took 20.16764951s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:03:42.796409    3758 node_conditions.go:102] verifying NodePressure condition ...
	I0816 10:03:42.977594    3758 request.go:632] Waited for 181.143005ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0816 10:03:42.977695    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0816 10:03:42.977706    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.977719    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.977724    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.981373    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:42.981988    3758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 10:03:42.982013    3758 node_conditions.go:123] node cpu capacity is 2
	I0816 10:03:42.982039    3758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 10:03:42.982043    3758 node_conditions.go:123] node cpu capacity is 2
	I0816 10:03:42.982046    3758 node_conditions.go:105] duration metric: took 185.637747ms to run NodePressure ...
	I0816 10:03:42.982055    3758 start.go:241] waiting for startup goroutines ...
	I0816 10:03:42.982079    3758 start.go:255] writing updated cluster config ...
	I0816 10:03:43.003740    3758 out.go:201] 
	I0816 10:03:43.024885    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:03:43.024976    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:03:43.047776    3758 out.go:177] * Starting "ha-286000-m03" control-plane node in "ha-286000" cluster
	I0816 10:03:43.137621    3758 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:03:43.137655    3758 cache.go:56] Caching tarball of preloaded images
	I0816 10:03:43.137861    3758 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 10:03:43.137884    3758 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:03:43.138022    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:03:43.159475    3758 start.go:360] acquireMachinesLock for ha-286000-m03: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:03:43.159592    3758 start.go:364] duration metric: took 91.442µs to acquireMachinesLock for "ha-286000-m03"
	I0816 10:03:43.159625    3758 start.go:93] Provisioning new machine with config: &{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:03:43.159736    3758 start.go:125] createHost starting for "m03" (driver="hyperkit")
	I0816 10:03:43.180569    3758 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 10:03:43.180719    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:03:43.180762    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:03:43.191087    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51098
	I0816 10:03:43.191558    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:03:43.191889    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:03:43.191899    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:03:43.192106    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:03:43.192302    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:03:43.192394    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:43.192506    3758 start.go:159] libmachine.API.Create for "ha-286000" (driver="hyperkit")
	I0816 10:03:43.192526    3758 client.go:168] LocalClient.Create starting
	I0816 10:03:43.192559    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem
	I0816 10:03:43.192606    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:03:43.192620    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:03:43.192675    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem
	I0816 10:03:43.192704    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:03:43.192714    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:03:43.192726    3758 main.go:141] libmachine: Running pre-create checks...
	I0816 10:03:43.192731    3758 main.go:141] libmachine: (ha-286000-m03) Calling .PreCreateCheck
	I0816 10:03:43.192813    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:43.192833    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetConfigRaw
	I0816 10:03:43.202038    3758 main.go:141] libmachine: Creating machine...
	I0816 10:03:43.202062    3758 main.go:141] libmachine: (ha-286000-m03) Calling .Create
	I0816 10:03:43.202302    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:43.202574    3758 main.go:141] libmachine: (ha-286000-m03) DBG | I0816 10:03:43.202284    3848 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:03:43.202705    3758 main.go:141] libmachine: (ha-286000-m03) Downloading /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1276/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 10:03:43.529965    3758 main.go:141] libmachine: (ha-286000-m03) DBG | I0816 10:03:43.529902    3848 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa...
	I0816 10:03:43.631057    3758 main.go:141] libmachine: (ha-286000-m03) DBG | I0816 10:03:43.630965    3848 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/ha-286000-m03.rawdisk...
	I0816 10:03:43.631074    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Writing magic tar header
	I0816 10:03:43.631083    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Writing SSH key tar header
	I0816 10:03:43.631711    3758 main.go:141] libmachine: (ha-286000-m03) DBG | I0816 10:03:43.631681    3848 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03 ...
	I0816 10:03:44.155270    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:44.155286    3758 main.go:141] libmachine: (ha-286000-m03) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/hyperkit.pid
	I0816 10:03:44.155318    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Using UUID 408efb18-ff91-428f-b414-6eb0d982a779
	I0816 10:03:44.181981    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Generated MAC 8a:e:de:5b:b5:8b
	I0816 10:03:44.182010    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000
	I0816 10:03:44.182059    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"408efb18-ff91-428f-b414-6eb0d982a779", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:03:44.182101    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"408efb18-ff91-428f-b414-6eb0d982a779", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:03:44.182228    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "408efb18-ff91-428f-b414-6eb0d982a779", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/ha-286000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"}
	I0816 10:03:44.182306    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 408efb18-ff91-428f-b414-6eb0d982a779 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/ha-286000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"
	I0816 10:03:44.182332    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 10:03:44.185479    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: Pid is 3849
	I0816 10:03:44.185900    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 0
	I0816 10:03:44.185912    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:44.186025    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:44.186994    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:44.187062    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:03:44.187079    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:03:44.187114    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:03:44.187142    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:03:44.187168    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:03:44.187185    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:03:44.193352    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 10:03:44.201781    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 10:03:44.202595    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:03:44.202610    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:03:44.202637    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:03:44.202653    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:03:44.588441    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 10:03:44.588458    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 10:03:44.703100    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:03:44.703117    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:03:44.703125    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:03:44.703135    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:03:44.703952    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 10:03:44.703963    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 10:03:46.188345    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 1
	I0816 10:03:46.188360    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:46.188480    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:46.189255    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:46.189306    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:03:46.189321    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:03:46.189335    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:03:46.189352    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:03:46.189359    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:03:46.189385    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:03:48.190823    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 2
	I0816 10:03:48.190838    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:48.190916    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:48.191692    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:48.191747    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:03:48.191757    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:03:48.191779    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:03:48.191787    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:03:48.191803    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:03:48.191815    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:03:50.191897    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 3
	I0816 10:03:50.191913    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:50.191977    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:50.192744    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:50.192793    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:03:50.192802    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:03:50.192811    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:03:50.192820    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:03:50.192836    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:03:50.192850    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:03:50.310705    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:50 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 10:03:50.310770    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:50 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 10:03:50.310783    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:50 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 10:03:50.334054    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:50 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 10:03:52.193443    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 4
	I0816 10:03:52.193459    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:52.193535    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:52.194319    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:52.194367    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:03:52.194379    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:03:52.194390    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:03:52.194399    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:03:52.194408    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:03:52.194419    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:03:54.195434    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 5
	I0816 10:03:54.195456    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:54.195540    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:54.196406    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:54.196510    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 6 entries in /var/db/dhcpd_leases!
	I0816 10:03:54.196530    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66c0d7f9}
	I0816 10:03:54.196551    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found match: 8a:e:de:5b:b5:8b
	I0816 10:03:54.196553    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetConfigRaw
	I0816 10:03:54.196565    3758 main.go:141] libmachine: (ha-286000-m03) DBG | IP: 192.169.0.7
	I0816 10:03:54.197236    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:54.197351    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:54.197441    3758 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 10:03:54.197454    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetState
	I0816 10:03:54.197541    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:54.197611    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:54.198439    3758 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 10:03:54.198446    3758 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 10:03:54.198450    3758 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 10:03:54.198454    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:54.198532    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:54.198612    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:54.198695    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:54.198783    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:54.198892    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:54.199063    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:54.199071    3758 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 10:03:55.266165    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 10:03:55.266179    3758 main.go:141] libmachine: Detecting the provisioner...
	I0816 10:03:55.266185    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.266318    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.266413    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.266507    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.266601    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.266731    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.266879    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.266886    3758 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 10:03:55.332778    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 10:03:55.332822    3758 main.go:141] libmachine: found compatible host: buildroot
	I0816 10:03:55.332828    3758 main.go:141] libmachine: Provisioning with buildroot...
	I0816 10:03:55.332834    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:03:55.332971    3758 buildroot.go:166] provisioning hostname "ha-286000-m03"
	I0816 10:03:55.332982    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:03:55.333079    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.333186    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.333277    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.333375    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.333463    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.333593    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.333737    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.333746    3758 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-286000-m03 && echo "ha-286000-m03" | sudo tee /etc/hostname
	I0816 10:03:55.411701    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-286000-m03
	
	I0816 10:03:55.411715    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.411851    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.411965    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.412057    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.412140    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.412274    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.412418    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.412429    3758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-286000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-286000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-286000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 10:03:55.485102    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 10:03:55.485118    3758 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19461-1276/.minikube CaCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19461-1276/.minikube}
	I0816 10:03:55.485127    3758 buildroot.go:174] setting up certificates
	I0816 10:03:55.485134    3758 provision.go:84] configureAuth start
	I0816 10:03:55.485141    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:03:55.485284    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:03:55.485375    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.485445    3758 provision.go:143] copyHostCerts
	I0816 10:03:55.485474    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:03:55.485527    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem, removing ...
	I0816 10:03:55.485533    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:03:55.485654    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem (1123 bytes)
	I0816 10:03:55.485864    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:03:55.485895    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem, removing ...
	I0816 10:03:55.485900    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:03:55.486003    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem (1679 bytes)
	I0816 10:03:55.486172    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:03:55.486206    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem, removing ...
	I0816 10:03:55.486215    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:03:55.486286    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem (1078 bytes)
	I0816 10:03:55.486435    3758 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem org=jenkins.ha-286000-m03 san=[127.0.0.1 192.169.0.7 ha-286000-m03 localhost minikube]
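	Note: the server certificate is generated with a SAN list (127.0.0.1, the node IP, the node name, localhost, minikube) so the TLS-secured Docker endpoint verifies under any of those names. One way to inspect the SANs on the generated file, assuming OpenSSL 1.1.1+ on the host (path as logged above):
	
		openssl x509 -noout -ext subjectAltName \
		  -in /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem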
	I0816 10:03:55.572803    3758 provision.go:177] copyRemoteCerts
	I0816 10:03:55.572855    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 10:03:55.572869    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.573015    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.573114    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.573208    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.573304    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:03:55.612104    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 10:03:55.612186    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 10:03:55.632484    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 10:03:55.632548    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 10:03:55.652677    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 10:03:55.652752    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 10:03:55.672555    3758 provision.go:87] duration metric: took 187.4165ms to configureAuth
	I0816 10:03:55.672568    3758 buildroot.go:189] setting minikube options for container-runtime
	I0816 10:03:55.672735    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:03:55.672748    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:55.672889    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.672982    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.673071    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.673157    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.673245    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.673371    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.673496    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.673504    3758 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 10:03:55.738053    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 10:03:55.738064    3758 buildroot.go:70] root file system type: tmpfs
	I0816 10:03:55.738156    3758 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 10:03:55.738167    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.738306    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.738397    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.738489    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.738573    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.738694    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.738841    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.738890    3758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 10:03:55.813774    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
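	Note: the rendered unit assigns Environment=NO_PROXY twice; per systemd.exec(5), a later assignment of the same variable overrides the earlier one, so the effective value is NO_PROXY=192.169.0.5,192.169.0.6. The resolved environment can be confirmed on the guest with:
	
		systemctl show docker --property=Environment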
	
	I0816 10:03:55.813790    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.813924    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.814019    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.814103    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.814193    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.814320    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.814470    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.814484    3758 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 10:03:57.356529    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
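	Note: the diff-or-replace command above is an idempotent update: diff exits non-zero when the files differ and also, as here, when the old unit does not exist yet, and only in those cases is the new unit moved into place and Docker (re)started. The same pattern in a generic, hedged form (paths are placeholders):
	
		sudo diff -u /etc/myapp.conf /etc/myapp.conf.new \
		  || { sudo mv /etc/myapp.conf.new /etc/myapp.conf && sudo systemctl daemon-reload; }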
	
	I0816 10:03:57.356544    3758 main.go:141] libmachine: Checking connection to Docker...
	I0816 10:03:57.356549    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetURL
	I0816 10:03:57.356692    3758 main.go:141] libmachine: Docker is up and running!
	I0816 10:03:57.356700    3758 main.go:141] libmachine: Reticulating splines...
	I0816 10:03:57.356705    3758 client.go:171] duration metric: took 14.164443579s to LocalClient.Create
	I0816 10:03:57.356717    3758 start.go:167] duration metric: took 14.164482312s to libmachine.API.Create "ha-286000"
	I0816 10:03:57.356727    3758 start.go:293] postStartSetup for "ha-286000-m03" (driver="hyperkit")
	I0816 10:03:57.356734    3758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 10:03:57.356744    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:57.356919    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 10:03:57.356935    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:57.357030    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:57.357130    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:57.357273    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:57.357390    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:03:57.398231    3758 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 10:03:57.402472    3758 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 10:03:57.402483    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/addons for local assets ...
	I0816 10:03:57.402571    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/files for local assets ...
	I0816 10:03:57.402723    3758 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> 18312.pem in /etc/ssl/certs
	I0816 10:03:57.402729    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /etc/ssl/certs/18312.pem
	I0816 10:03:57.402895    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 10:03:57.412226    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:03:57.443242    3758 start.go:296] duration metric: took 86.507779ms for postStartSetup
	I0816 10:03:57.443268    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetConfigRaw
	I0816 10:03:57.443871    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:03:57.444031    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:03:57.444386    3758 start.go:128] duration metric: took 14.284913269s to createHost
	I0816 10:03:57.444401    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:57.444490    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:57.444568    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:57.444658    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:57.444732    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:57.444842    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:57.444963    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:57.444971    3758 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 10:03:57.509634    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723827837.586061636
	
	I0816 10:03:57.509648    3758 fix.go:216] guest clock: 1723827837.586061636
	I0816 10:03:57.509653    3758 fix.go:229] Guest: 2024-08-16 10:03:57.586061636 -0700 PDT Remote: 2024-08-16 10:03:57.444395 -0700 PDT m=+129.370490857 (delta=141.666636ms)
	I0816 10:03:57.509665    3758 fix.go:200] guest clock delta is within tolerance: 141.666636ms
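	Note: the guest clock check compares date +%s.%N output from the VM against the host wall clock and accepts small skew (here 141.666636ms). The same comparison by hand, with IP and SSH user as logged above (the guest is Linux, so %N resolves there; the macOS host's BSD date lacks %N, hence seconds granularity on the host side):
	
		ssh docker@192.169.0.7 'date +%s.%N'; date +%s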
	I0816 10:03:57.509669    3758 start.go:83] releasing machines lock for "ha-286000-m03", held for 14.350339851s
	I0816 10:03:57.509691    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:57.509832    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:03:57.542277    3758 out.go:177] * Found network options:
	I0816 10:03:57.562266    3758 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0816 10:03:57.583040    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	W0816 10:03:57.583073    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:03:57.583092    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:57.583964    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:57.584310    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:57.584469    3758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 10:03:57.584522    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	W0816 10:03:57.584570    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	W0816 10:03:57.584596    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:03:57.584708    3758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0816 10:03:57.584735    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:57.584813    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:57.584995    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:57.585023    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:57.585164    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:57.585195    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:57.585338    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:03:57.585406    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:57.585566    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	W0816 10:03:57.623481    3758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 10:03:57.623541    3758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 10:03:57.670278    3758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 10:03:57.670298    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:03:57.670408    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:03:57.686134    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 10:03:57.694681    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 10:03:57.703134    3758 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 10:03:57.703198    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 10:03:57.711736    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:03:57.720041    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 10:03:57.728421    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:03:57.737252    3758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 10:03:57.746356    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 10:03:57.755117    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 10:03:57.763962    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 10:03:57.772639    3758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 10:03:57.780684    3758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 10:03:57.788938    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:03:57.893932    3758 ssh_runner.go:195] Run: sudo systemctl restart containerd
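	Note: the sed sequence above rewrites /etc/containerd/config.toml in place: it pins sandbox_image to registry.k8s.io/pause:3.10, disables restrict_oom_score_adj, forces the cgroupfs driver via SystemdCgroup = false, migrates io.containerd.runtime.v1.linux and runc.v1 references to io.containerd.runc.v2, points conf_dir at /etc/cni/net.d, and re-enables unprivileged ports. A quick spot check of the result on the guest (illustrative; not captured from this run):
	
		grep -E 'sandbox_image|SystemdCgroup|conf_dir|runc' /etc/containerd/config.toml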
	I0816 10:03:57.913150    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:03:57.913224    3758 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 10:03:57.932274    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:03:57.945000    3758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 10:03:57.965222    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:03:57.977460    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:03:57.988259    3758 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 10:03:58.006552    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:03:58.017261    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:03:58.032545    3758 ssh_runner.go:195] Run: which cri-dockerd
	I0816 10:03:58.035464    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 10:03:58.042742    3758 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 10:03:58.056747    3758 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 10:03:58.163471    3758 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 10:03:58.281020    3758 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 10:03:58.281044    3758 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 10:03:58.294992    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:03:58.399122    3758 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:04:59.415916    3758 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.017935847s)
	I0816 10:04:59.415981    3758 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0816 10:04:59.453653    3758 out.go:201] 
	W0816 10:04:59.474040    3758 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 16 17:03:56 ha-286000-m03 systemd[1]: Starting Docker Application Container Engine...
	Aug 16 17:03:56 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:56.200976866Z" level=info msg="Starting up"
	Aug 16 17:03:56 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:56.201455258Z" level=info msg="containerd not running, starting managed containerd"
	Aug 16 17:03:56 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:56.201908035Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=519
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.218394236Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233514110Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233584256Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233654513Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233690141Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233768300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233806284Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233955562Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233996222Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.234026670Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.234054929Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.234137090Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.234382642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.235934793Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.235985918Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.236117022Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.236159609Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.236256962Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.236327154Z" level=info msg="metadata content store policy set" policy=shared
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238422470Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238509436Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238555940Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238594343Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238633954Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238734892Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238944917Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239083528Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239123172Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239153894Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239184820Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239214617Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239248728Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239285992Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239333696Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239368624Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239411950Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239451554Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239490333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239523121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239554681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239589296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239621390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239653930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239683266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239712579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239742056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239772828Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239801901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239830636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239859570Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239893189Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239929555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239960582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239991787Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240076099Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240121809Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240153088Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240182438Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240210918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240240067Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240268586Z" level=info msg="NRI interface is disabled by configuration."
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240520719Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240609661Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240710485Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240795617Z" level=info msg="containerd successfully booted in 0.023088s"
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.221812848Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.228854621Z" level=info msg="Loading containers: start."
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.312023793Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.390459780Z" level=info msg="Loading containers: done."
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.404746652Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.404864935Z" level=info msg="Daemon has completed initialization"
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.430646618Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.430810646Z" level=info msg="API listen on [::]:2376"
	Aug 16 17:03:57 ha-286000-m03 systemd[1]: Started Docker Application Container Engine.
	Aug 16 17:03:58 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:58.488220464Z" level=info msg="Processing signal 'terminated'"
	Aug 16 17:03:58 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:58.489144852Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 16 17:03:58 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:58.489473398Z" level=info msg="Daemon shutdown complete"
	Aug 16 17:03:58 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:58.489515121Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 16 17:03:58 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:58.489527809Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 16 17:03:58 ha-286000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Aug 16 17:03:59 ha-286000-m03 systemd[1]: docker.service: Deactivated successfully.
	Aug 16 17:03:59 ha-286000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Aug 16 17:03:59 ha-286000-m03 systemd[1]: Starting Docker Application Container Engine...
	Aug 16 17:03:59 ha-286000-m03 dockerd[913]: time="2024-08-16T17:03:59.520198872Z" level=info msg="Starting up"
	Aug 16 17:04:59 ha-286000-m03 dockerd[913]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 16 17:04:59 ha-286000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 16 17:04:59 ha-286000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 16 17:04:59 ha-286000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
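	Note: one plausible reading of the journal above: the first dockerd (pid 513) booted its own managed containerd and came up, was then stopped for reconfiguration, and the restarted dockerd (pid 913) instead blocked dialing /run/containerd/containerd.sock until the 60s deadline; that matches the systemctl stop -f containerd issued at 10:03:57 with no successful containerd restart visible before the docker restart. Plausible first diagnostics for this failure mode (run on the guest):
	
		systemctl status containerd docker
		ls -l /run/containerd/containerd.sock
		journalctl --no-pager -u containerd | tail -n 50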
	W0816 10:04:59.474111    3758 out.go:270] * 
	W0816 10:04:59.474938    3758 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:04:59.558192    3758 out.go:201] 
	
	
	==> Docker <==
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.631803753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.636334140Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.636542015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.636768594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.637206626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:02:50 ha-286000 cri-dockerd[1134]: time="2024-08-16T17:02:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fbd84fb813c9034ce56be933a9dc0c8539c5c831abbd163996da762065f0c208/resolv.conf as [nameserver 192.169.0.1]"
	Aug 16 17:02:50 ha-286000 cri-dockerd[1134]: time="2024-08-16T17:02:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/452942e267927b3a15327ece33bffe6fb305db22e6a72ff9b65d4acfe89f3891/resolv.conf as [nameserver 192.169.0.1]"
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.842515621Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.842901741Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.843146100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.843415719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.885181891Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.885227629Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.885240492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.885438543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:05:03 ha-286000 dockerd[1241]: time="2024-08-16T17:05:03.188642191Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:05:03 ha-286000 dockerd[1241]: time="2024-08-16T17:05:03.188710762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:05:03 ha-286000 dockerd[1241]: time="2024-08-16T17:05:03.188723920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:05:03 ha-286000 dockerd[1241]: time="2024-08-16T17:05:03.188799320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:05:03 ha-286000 cri-dockerd[1134]: time="2024-08-16T17:05:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1873ade92edb9d51940849fdee8cb6db41b03368956580ec6099a918aff580e1/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 16 17:05:04 ha-286000 cri-dockerd[1134]: time="2024-08-16T17:05:04Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Aug 16 17:05:04 ha-286000 dockerd[1241]: time="2024-08-16T17:05:04.522783748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:05:04 ha-286000 dockerd[1241]: time="2024-08-16T17:05:04.522878436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:05:04 ha-286000 dockerd[1241]: time="2024-08-16T17:05:04.522904596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:05:04 ha-286000 dockerd[1241]: time="2024-08-16T17:05:04.523003751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5bbe7a8cc492e       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   11 minutes ago      Running             busybox                   0                   1873ade92edb9       busybox-7dff88458-dvmvk
	bcd7170b050a5       cbb01a7bd410d                                                                                         14 minutes ago      Running             coredns                   0                   452942e267927       coredns-6f6b679f8f-rfbz7
	60d3d03e297ce       cbb01a7bd410d                                                                                         14 minutes ago      Running             coredns                   0                   fbd84fb813c90       coredns-6f6b679f8f-2kqjf
	f55b59f53c6eb       6e38f40d628db                                                                                         14 minutes ago      Running             storage-provisioner       0                   482990a4b00e6       storage-provisioner
	2677394dcd66f       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              14 minutes ago      Running             kindnet-cni               0                   afaf657e85c32       kindnet-whqxb
	81f6c96d46494       ad83b2ca7b09e                                                                                         14 minutes ago      Running             kube-proxy                0                   dfb822fc27b5e       kube-proxy-w4nt2
	cafa34c562392       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     14 minutes ago      Running             kube-vip                  0                   69fba128b04a6       kube-vip-ha-286000
	8f5867ee99d9b       045733566833c                                                                                         14 minutes ago      Running             kube-controller-manager   0                   e87fd3e77384f       kube-controller-manager-ha-286000
	f7b2e9efdd94f       1766f54c897f0                                                                                         14 minutes ago      Running             kube-scheduler            0                   8e2b186980aff       kube-scheduler-ha-286000
	d8dadff6cec78       2e96e5913fc06                                                                                         14 minutes ago      Running             etcd                      0                   cdb14ff7d8896       etcd-ha-286000
	5f3adc202a7a2       604f5db92eaa8                                                                                         14 minutes ago      Running             kube-apiserver            0                   818ee6dafe6c9       kube-apiserver-ha-286000
	
	
	==> coredns [60d3d03e297c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:37081 - 62351 "HINFO IN 7437422972060865489.3041931607585121070. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009759141s
	[INFO] 10.244.0.4:34542 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.00094119s
	[INFO] 10.244.0.4:38912 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.000921826s
	[INFO] 10.244.1.2:39585 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000070042s
	[INFO] 10.244.0.4:39673 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000101817s
	[INFO] 10.244.0.4:55820 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000225412s
	[INFO] 10.244.1.2:48427 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135793s
	[INFO] 10.244.1.2:33204 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000791531s
	[INFO] 10.244.1.2:51238 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081236s
	[INFO] 10.244.1.2:42705 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000135743s
	[INFO] 10.244.1.2:33254 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106295s
	[INFO] 10.244.0.4:53900 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067988s
	[INFO] 10.244.1.2:48994 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00006771s
	[INFO] 10.244.1.2:56734 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000138883s
	[INFO] 10.244.0.4:53039 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000056881s
	[INFO] 10.244.0.4:47474 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000052458s
	[INFO] 10.244.1.2:35027 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113257s
	[INFO] 10.244.1.2:60680 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000087182s
	[INFO] 10.244.1.2:36287 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000080566s
	
	
	==> coredns [bcd7170b050a] <==
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:47813 - 35687 "HINFO IN 8431179596625010309.1478595720938230641. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009077429s
	[INFO] 10.244.0.4:47786 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132847s
	[INFO] 10.244.0.4:40096 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.001354873s
	[INFO] 10.244.1.2:40884 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096214s
	[INFO] 10.244.1.2:55655 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000050276s
	[INFO] 10.244.1.2:47690 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000614458s
	[INFO] 10.244.0.4:45344 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000076791s
	[INFO] 10.244.0.4:53101 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.001042655s
	[INFO] 10.244.0.4:43889 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000082483s
	[INFO] 10.244.0.4:59210 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.00074308s
	[INFO] 10.244.0.4:38429 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00010233s
	[INFO] 10.244.0.4:48679 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00007214s
	[INFO] 10.244.1.2:43879 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000062278s
	[INFO] 10.244.1.2:45902 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000100209s
	[INFO] 10.244.1.2:39740 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071593s
	[INFO] 10.244.0.4:50878 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000077864s
	[INFO] 10.244.0.4:51260 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078903s
	[INFO] 10.244.0.4:38206 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000049117s
	[INFO] 10.244.1.2:54952 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000071797s
	[INFO] 10.244.1.2:36478 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063757s
	[INFO] 10.244.0.4:43240 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000066616s
	[INFO] 10.244.0.4:60894 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000052873s
	[INFO] 10.244.1.2:43932 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00008816s
	
	
	==> describe nodes <==
	Name:               ha-286000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-286000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=ha-286000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T10_02_26_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:02:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-286000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:16:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 17:15:42 +0000   Fri, 16 Aug 2024 17:02:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 17:15:42 +0000   Fri, 16 Aug 2024 17:02:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 17:15:42 +0000   Fri, 16 Aug 2024 17:02:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 17:15:42 +0000   Fri, 16 Aug 2024 17:02:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-286000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 a56c711da9894c3c86f2a7af9fec2b53
	  System UUID:                ad96408c-0000-0000-89eb-d74e5b68d297
	  Boot ID:                    23e4d4eb-a603-45bf-aca4-fb0893407f5a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-dvmvk              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-6f6b679f8f-2kqjf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-6f6b679f8f-rfbz7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-286000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-whqxb                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-286000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-286000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-w4nt2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-286000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-286000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node ha-286000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node ha-286000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node ha-286000 status is now: NodeHasSufficientPID
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node ha-286000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node ha-286000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node ha-286000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                node-controller  Node ha-286000 event: Registered Node ha-286000 in Controller
	  Normal  NodeReady                14m                kubelet          Node ha-286000 status is now: NodeReady
	  Normal  RegisteredNode           13m                node-controller  Node ha-286000 event: Registered Node ha-286000 in Controller
	
	
	Name:               ha-286000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-286000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=ha-286000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T10_03_22_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:03:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-286000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:16:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 17:15:35 +0000   Fri, 16 Aug 2024 17:03:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 17:15:35 +0000   Fri, 16 Aug 2024 17:03:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 17:15:35 +0000   Fri, 16 Aug 2024 17:03:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 17:15:35 +0000   Fri, 16 Aug 2024 17:03:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-286000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 42bf4a1b451f44ad925f50a6a94e4cff
	  System UUID:                f7634935-0000-0000-a751-ac72999da031
	  Boot ID:                    e82d142e-37b9-4938-81bc-f5bc1a2db23f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-k9m92                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-286000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-t9kjf                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-286000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-286000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-pt669                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-286000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-286000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node ha-286000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node ha-286000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node ha-286000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node ha-286000-m02 event: Registered Node ha-286000-m02 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-286000-m02 event: Registered Node ha-286000-m02 in Controller
	
	
	==> dmesg <==
	[  +0.006640] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.760553] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +1.351722] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000003] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug16 17:02] systemd-fstab-generator[519]: Ignoring "noauto" option for root device
	[  +0.114039] systemd-fstab-generator[531]: Ignoring "noauto" option for root device
	[  +1.207693] kauditd_printk_skb: 42 callbacks suppressed
	[  +0.611653] systemd-fstab-generator[786]: Ignoring "noauto" option for root device
	[  +0.265175] systemd-fstab-generator[846]: Ignoring "noauto" option for root device
	[  +0.101006] systemd-fstab-generator[858]: Ignoring "noauto" option for root device
	[  +0.123308] systemd-fstab-generator[872]: Ignoring "noauto" option for root device
	[  +2.497195] systemd-fstab-generator[1086]: Ignoring "noauto" option for root device
	[  +0.110299] systemd-fstab-generator[1098]: Ignoring "noauto" option for root device
	[  +0.095670] systemd-fstab-generator[1110]: Ignoring "noauto" option for root device
	[  +0.122299] systemd-fstab-generator[1126]: Ignoring "noauto" option for root device
	[  +3.636333] systemd-fstab-generator[1227]: Ignoring "noauto" option for root device
	[  +0.056190] kauditd_printk_skb: 233 callbacks suppressed
	[  +2.505796] systemd-fstab-generator[1479]: Ignoring "noauto" option for root device
	[  +3.634449] systemd-fstab-generator[1611]: Ignoring "noauto" option for root device
	[  +0.055756] kauditd_printk_skb: 70 callbacks suppressed
	[  +6.910973] systemd-fstab-generator[2107]: Ignoring "noauto" option for root device
	[  +0.079351] kauditd_printk_skb: 72 callbacks suppressed
	[  +9.264773] kauditd_printk_skb: 51 callbacks suppressed
	[Aug16 17:03] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [d8dadff6cec7] <==
	{"level":"info","ts":"2024-08-16T17:03:20.601869Z","caller":"rafthttp/peer.go:133","msg":"starting remote peer","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:20.601961Z","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:20.602332Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:20.602493Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:20.602718Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34","remote-peer-urls":["https://192.169.0.6:2380"]}
	{"level":"info","ts":"2024-08-16T17:03:20.606154Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:20.606173Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:20.606385Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:20.607792Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"b8c6c7563d17d844","raft-conf-change":"ConfChangeAddLearnerNode","raft-conf-change-node-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:21.553074Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:21.553169Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:21.561367Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"9633c02797b6d34","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-16T17:03:21.561449Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:21.561528Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:21.571776Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"9633c02797b6d34","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-16T17:03:21.571882Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:22.121813Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 switched to configuration voters=(676450350361439540 13314548521573537860)"}
	{"level":"info","ts":"2024-08-16T17:03:22.121979Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844"}
	{"level":"info","ts":"2024-08-16T17:03:22.122011Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"b8c6c7563d17d844","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"9633c02797b6d34"}
	{"level":"warn","ts":"2024-08-16T17:05:03.077229Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.58419ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/default/busybox-7dff88458\" ","response":"range_response_count:1 size:1985"}
	{"level":"info","ts":"2024-08-16T17:05:03.077410Z","caller":"traceutil/trace.go:171","msg":"trace[51649115] range","detail":"{range_begin:/registry/replicasets/default/busybox-7dff88458; range_end:; response_count:1; response_revision:920; }","duration":"112.799262ms","start":"2024-08-16T17:05:02.964599Z","end":"2024-08-16T17:05:03.077398Z","steps":["trace[51649115] 'agreement among raft nodes before linearized reading'  (duration: 72.831989ms)","trace[51649115] 'range keys from in-memory index tree'  (duration: 39.723026ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-16T17:05:03.078503Z","caller":"traceutil/trace.go:171","msg":"trace[1276232025] transaction","detail":"{read_only:false; response_revision:921; number_of_response:1; }","duration":"116.265354ms","start":"2024-08-16T17:05:02.962223Z","end":"2024-08-16T17:05:03.078489Z","steps":["trace[1276232025] 'process raft request'  (duration: 75.236605ms)","trace[1276232025] 'compare'  (duration: 39.643768ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-16T17:12:20.455094Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1242}
	{"level":"info","ts":"2024-08-16T17:12:20.478439Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1242,"took":"22.921055ms","hash":729855672,"current-db-size-bytes":3137536,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":1589248,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-08-16T17:12:20.478900Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":729855672,"revision":1242,"compact-revision":-1}
	
	
	==> kernel <==
	 17:17:00 up 15 min,  0 users,  load average: 0.21, 0.20, 0.14
	Linux ha-286000 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2677394dcd66] <==
	I0816 17:15:55.223123       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:16:05.227238       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:16:05.227482       1 main.go:299] handling current node
	I0816 17:16:05.227602       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:16:05.227679       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:16:15.225728       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:16:15.225955       1 main.go:299] handling current node
	I0816 17:16:15.226059       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:16:15.226166       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:16:25.225541       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:16:25.225628       1 main.go:299] handling current node
	I0816 17:16:25.225642       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:16:25.225648       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:16:35.223579       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:16:35.223618       1 main.go:299] handling current node
	I0816 17:16:35.223629       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:16:35.223637       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:16:45.222993       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:16:45.223332       1 main.go:299] handling current node
	I0816 17:16:45.223676       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:16:45.223947       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:16:55.222892       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:16:55.223037       1 main.go:299] handling current node
	I0816 17:16:55.223088       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:16:55.223105       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [5f3adc202a7a] <==
	I0816 17:02:23.102546       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0816 17:02:23.105482       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0816 17:02:23.105513       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0816 17:02:23.438732       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0816 17:02:23.465745       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0816 17:02:23.506778       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0816 17:02:23.510486       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5]
	I0816 17:02:23.511071       1 controller.go:615] quota admission added evaluator for: endpoints
	I0816 17:02:23.513769       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0816 17:02:24.114339       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0816 17:02:25.748425       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0816 17:02:25.762462       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0816 17:02:25.770706       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0816 17:02:29.466806       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0816 17:02:29.715365       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0816 17:16:53.377658       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51358: use of closed network connection
	E0816 17:16:53.575639       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51360: use of closed network connection
	E0816 17:16:53.910337       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51365: use of closed network connection
	E0816 17:16:54.106751       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51367: use of closed network connection
	E0816 17:16:54.415124       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51372: use of closed network connection
	E0816 17:16:54.604288       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51374: use of closed network connection
	E0816 17:16:57.856178       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51405: use of closed network connection
	E0816 17:16:58.061897       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51407: use of closed network connection
	E0816 17:16:58.252131       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51409: use of closed network connection
	E0816 17:16:58.440772       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51411: use of closed network connection
	
	
	==> kube-controller-manager [8f5867ee99d9] <==
	I0816 17:03:28.116114       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m02"
	I0816 17:03:28.213864       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m02"
	I0816 17:03:30.558882       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m02"
	I0816 17:03:39.011535       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m02"
	I0816 17:03:39.020670       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m02"
	I0816 17:03:43.141912       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m02"
	I0816 17:03:50.726543       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m02"
	I0816 17:05:02.843343       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="49.61706ms"
	I0816 17:05:02.889189       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="45.792718ms"
	I0816 17:05:02.932283       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="43.030665ms"
	I0816 17:05:03.088994       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="156.603755ms"
	I0816 17:05:03.131864       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="42.618017ms"
	I0816 17:05:03.132076       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="38.426µs"
	I0816 17:05:03.151764       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="18.908957ms"
	I0816 17:05:03.156449       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="69.081µs"
	I0816 17:05:04.790508       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="5.652445ms"
	I0816 17:05:04.790798       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="31.186µs"
	I0816 17:05:05.070211       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="30.141421ms"
	I0816 17:05:05.070269       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="20.554µs"
	I0816 17:05:22.202209       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m02"
	I0816 17:05:29.027322       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000"
	I0816 17:10:29.521403       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m02"
	I0816 17:10:35.744951       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000"
	I0816 17:15:35.199030       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m02"
	I0816 17:15:42.092912       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000"
	
	
	==> kube-proxy [81f6c96d4649] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 17:02:30.214569       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 17:02:30.222978       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0816 17:02:30.223011       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 17:02:30.267797       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 17:02:30.267825       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 17:02:30.267843       1 server_linux.go:169] "Using iptables Proxier"
	I0816 17:02:30.270154       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 17:02:30.270311       1 server.go:483] "Version info" version="v1.31.0"
	I0816 17:02:30.270319       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 17:02:30.271352       1 config.go:197] "Starting service config controller"
	I0816 17:02:30.271358       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 17:02:30.271372       1 config.go:104] "Starting endpoint slice config controller"
	I0816 17:02:30.271375       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 17:02:30.271598       1 config.go:326] "Starting node config controller"
	I0816 17:02:30.271603       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 17:02:30.371824       1 shared_informer.go:320] Caches are synced for node config
	I0816 17:02:30.371859       1 shared_informer.go:320] Caches are synced for service config
	I0816 17:02:30.371867       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [f7b2e9efdd94] <==
	W0816 17:02:22.176847       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 17:02:22.176852       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:22.176967       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 17:02:22.176999       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:22.177013       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 17:02:22.179079       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:22.179253       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 17:02:22.179322       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0816 17:02:23.072963       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 17:02:23.073256       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:23.081000       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 17:02:23.081176       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:23.142220       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 17:02:23.142263       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:23.166962       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 17:02:23.167003       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:23.255186       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 17:02:23.255230       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:23.456273       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 17:02:23.456447       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0816 17:02:26.460413       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0816 17:05:02.827326       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-k9m92\": pod busybox-7dff88458-k9m92 is already assigned to node \"ha-286000-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-k9m92" node="ha-286000-m02"
	E0816 17:05:02.827387       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a516859c-0da8-4ba9-a896-ac720495818f(default/busybox-7dff88458-k9m92) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-k9m92"
	E0816 17:05:02.827403       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-k9m92\": pod busybox-7dff88458-k9m92 is already assigned to node \"ha-286000-m02\"" pod="default/busybox-7dff88458-k9m92"
	I0816 17:05:02.827520       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-k9m92" node="ha-286000-m02"
	
	
	==> kubelet <==
	Aug 16 17:12:25 ha-286000 kubelet[2114]: E0816 17:12:25.669484    2114 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 17:12:25 ha-286000 kubelet[2114]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 17:12:25 ha-286000 kubelet[2114]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 17:12:25 ha-286000 kubelet[2114]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:12:25 ha-286000 kubelet[2114]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 17:13:25 ha-286000 kubelet[2114]: E0816 17:13:25.672677    2114 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 17:13:25 ha-286000 kubelet[2114]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 17:13:25 ha-286000 kubelet[2114]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 17:13:25 ha-286000 kubelet[2114]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:13:25 ha-286000 kubelet[2114]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 17:14:25 ha-286000 kubelet[2114]: E0816 17:14:25.670434    2114 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 17:14:25 ha-286000 kubelet[2114]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 17:14:25 ha-286000 kubelet[2114]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 17:14:25 ha-286000 kubelet[2114]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:14:25 ha-286000 kubelet[2114]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 17:15:25 ha-286000 kubelet[2114]: E0816 17:15:25.670848    2114 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 17:15:25 ha-286000 kubelet[2114]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 17:15:25 ha-286000 kubelet[2114]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 17:15:25 ha-286000 kubelet[2114]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:15:25 ha-286000 kubelet[2114]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 17:16:25 ha-286000 kubelet[2114]: E0816 17:16:25.672451    2114 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 17:16:25 ha-286000 kubelet[2114]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 17:16:25 ha-286000 kubelet[2114]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 17:16:25 ha-286000 kubelet[2114]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:16:25 ha-286000 kubelet[2114]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
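
The repeated kubelet errors at the end of the log above come from its periodic iptables canary, which recreates a KUBE-KUBELET-CANARY chain so that a flush of the host's rules can be detected; the Buildroot guest kernel ships without an ip6tables nat table, so the IPv6 half of the probe fails once a minute with exit status 3. A minimal sketch, not kubelet's actual code, that reproduces the failing probe (assumes ip6tables is on PATH):

package main

import (
	"fmt"
	"os/exec"
)

// Attempt the same operation that fails in the kubelet log above:
// create a sentinel chain in the ip6tables "nat" table and surface
// the kernel's refusal.
func main() {
	out, err := exec.Command("ip6tables", "-w", "-t", "nat", "-N", "KUBE-KUBELET-CANARY").CombinedOutput()
	if err != nil {
		// On this guest kernel: exit status 3,
		// "can't initialize ip6tables table `nat': Table does not exist"
		fmt.Printf("canary probe failed: %v\n%s", err, out)
		return
	}
	fmt.Println("canary chain created; IPv6 nat table is available")
}
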
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-286000 -n ha-286000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-286000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7dff88458-99xmp
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-286000 describe pod busybox-7dff88458-99xmp
helpers_test.go:282: (dbg) kubectl --context ha-286000 describe pod busybox-7dff88458-99xmp:

-- stdout --
	Name:             busybox-7dff88458-99xmp
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wxcs4 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-wxcs4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  11m                  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  96s (x2 over 6m36s)  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  96s (x3 over 11m)    default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (3.74s)
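
The FailedScheduling events above are consistent with the test's busybox Deployment spreading one replica per node through required pod anti-affinity: the two Ready nodes (ha-286000, ha-286000-m02) already hold busybox-7dff88458-dvmvk and busybox-7dff88458-k9m92, so the third replica has nowhere to go until another node joins. Below is a sketch of such a spec built with the k8s.io/api types; the anti-affinity term keyed on app=busybox and kubernetes.io/hostname is an assumption, since the actual manifest is not shown in this log:

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(3) // one more replica than there are Ready nodes
	labels := map[string]string{"app": "busybox"}
	dep := appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Affinity: &corev1.Affinity{
						// Hard requirement: no two busybox pods on one hostname.
						// This is what yields "didn't match pod anti-affinity rules"
						// once replicas exceed schedulable nodes.
						PodAntiAffinity: &corev1.PodAntiAffinity{
							RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
								LabelSelector: &metav1.LabelSelector{MatchLabels: labels},
								TopologyKey:   "kubernetes.io/hostname",
							}},
						},
					},
					Containers: []corev1.Container{{
						Name:    "busybox",
						Image:   "gcr.io/k8s-minikube/busybox:1.28",
						Command: []string{"sleep", "3600"},
					}},
				},
			},
		},
	}
	fmt.Printf("%s: replicas=%d, topologyKey=%s\n", dep.Name, *dep.Spec.Replicas,
		dep.Spec.Template.Spec.Affinity.PodAntiAffinity.
			RequiredDuringSchedulingIgnoredDuringExecution[0].TopologyKey)
}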

TestMultiControlPlane/serial/AddWorkerNode (51.15s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-286000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-286000 -v=7 --alsologtostderr: (47.731063634s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 -p ha-286000 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-286000 status -v=7 --alsologtostderr: exit status 2 (452.717678ms)

-- stdout --
	ha-286000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-286000-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-286000-m03
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-286000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
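
The exit status 2 follows directly from the printed status: ha-286000-m03's host is Running while its kubelet and apiserver are Stopped, so adding the worker succeeded but the cluster is still reported unhealthy. A hypothetical distillation of that decision, not minikube's actual status code path:

package main

import "fmt"

// nodeStatus mirrors the fields printed by "minikube status" above.
type nodeStatus struct {
	Name, Host, Kubelet, APIServer string
}

// exitCode is a hypothetical reduction: any node whose host runs but
// whose kubelet or apiserver is stopped makes the overall status
// unhealthy (the "exit status 2" observed above).
func exitCode(nodes []nodeStatus) int {
	for _, n := range nodes {
		if n.Host == "Running" && (n.Kubelet == "Stopped" || n.APIServer == "Stopped") {
			return 2
		}
	}
	return 0
}

func main() {
	nodes := []nodeStatus{
		{"ha-286000", "Running", "Running", "Running"},
		{"ha-286000-m02", "Running", "Running", "Running"},
		{"ha-286000-m03", "Running", "Stopped", "Stopped"},
		{"ha-286000-m04", "Running", "Running", ""}, // worker: no apiserver field
	}
	fmt.Println("exit:", exitCode(nodes))
}
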
** stderr ** 
	I0816 10:17:48.805420    4259 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:17:48.805872    4259 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:17:48.805879    4259 out.go:358] Setting ErrFile to fd 2...
	I0816 10:17:48.805882    4259 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:17:48.806069    4259 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1276/.minikube/bin
	I0816 10:17:48.806250    4259 out.go:352] Setting JSON to false
	I0816 10:17:48.806272    4259 mustload.go:65] Loading cluster: ha-286000
	I0816 10:17:48.806309    4259 notify.go:220] Checking for updates...
	I0816 10:17:48.806588    4259 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:17:48.806603    4259 status.go:255] checking status of ha-286000 ...
	I0816 10:17:48.806958    4259 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:17:48.807013    4259 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:17:48.816153    4259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51483
	I0816 10:17:48.816498    4259 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:17:48.816941    4259 main.go:141] libmachine: Using API Version  1
	I0816 10:17:48.816957    4259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:17:48.817163    4259 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:17:48.817284    4259 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:17:48.817398    4259 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:17:48.817463    4259 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:17:48.818427    4259 status.go:330] ha-286000 host status = "Running" (err=<nil>)
	I0816 10:17:48.818448    4259 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:17:48.818688    4259 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:17:48.818711    4259 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:17:48.827349    4259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51485
	I0816 10:17:48.827695    4259 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:17:48.828113    4259 main.go:141] libmachine: Using API Version  1
	I0816 10:17:48.828152    4259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:17:48.828374    4259 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:17:48.828476    4259 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:17:48.828554    4259 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:17:48.828803    4259 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:17:48.828826    4259 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:17:48.838750    4259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51487
	I0816 10:17:48.839095    4259 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:17:48.839408    4259 main.go:141] libmachine: Using API Version  1
	I0816 10:17:48.839420    4259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:17:48.839630    4259 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:17:48.839737    4259 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:17:48.839874    4259 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:17:48.839896    4259 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:17:48.840001    4259 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:17:48.840077    4259 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:17:48.840147    4259 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:17:48.840230    4259 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:17:48.874808    4259 ssh_runner.go:195] Run: systemctl --version
	I0816 10:17:48.879215    4259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:17:48.890782    4259 kubeconfig.go:125] found "ha-286000" server: "https://192.169.0.254:8443"
	I0816 10:17:48.890806    4259 api_server.go:166] Checking apiserver status ...
	I0816 10:17:48.890849    4259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:17:48.902245    4259 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1897/cgroup
	W0816 10:17:48.910201    4259 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1897/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:17:48.910246    4259 ssh_runner.go:195] Run: ls
	I0816 10:17:48.913424    4259 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0816 10:17:48.916593    4259 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0816 10:17:48.916603    4259 status.go:422] ha-286000 apiserver status = Running (err=<nil>)
	I0816 10:17:48.916619    4259 status.go:257] ha-286000 status: &{Name:ha-286000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:17:48.916630    4259 status.go:255] checking status of ha-286000-m02 ...
	I0816 10:17:48.916882    4259 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:17:48.916902    4259 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:17:48.925614    4259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51491
	I0816 10:17:48.925938    4259 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:17:48.926255    4259 main.go:141] libmachine: Using API Version  1
	I0816 10:17:48.926265    4259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:17:48.926484    4259 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:17:48.926591    4259 main.go:141] libmachine: (ha-286000-m02) Calling .GetState
	I0816 10:17:48.926665    4259 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:17:48.926740    4259 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:17:48.927688    4259 status.go:330] ha-286000-m02 host status = "Running" (err=<nil>)
	I0816 10:17:48.927694    4259 host.go:66] Checking if "ha-286000-m02" exists ...
	I0816 10:17:48.927950    4259 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:17:48.927970    4259 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:17:48.936347    4259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51493
	I0816 10:17:48.936679    4259 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:17:48.937008    4259 main.go:141] libmachine: Using API Version  1
	I0816 10:17:48.937027    4259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:17:48.937223    4259 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:17:48.937327    4259 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:17:48.937410    4259 host.go:66] Checking if "ha-286000-m02" exists ...
	I0816 10:17:48.937664    4259 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:17:48.937687    4259 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:17:48.946361    4259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51495
	I0816 10:17:48.946730    4259 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:17:48.947073    4259 main.go:141] libmachine: Using API Version  1
	I0816 10:17:48.947086    4259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:17:48.947306    4259 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:17:48.947406    4259 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:17:48.947532    4259 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:17:48.947545    4259 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:17:48.947608    4259 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:17:48.947685    4259 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:17:48.947767    4259 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:17:48.947850    4259 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:17:48.992799    4259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:17:49.007816    4259 kubeconfig.go:125] found "ha-286000" server: "https://192.169.0.254:8443"
	I0816 10:17:49.007836    4259 api_server.go:166] Checking apiserver status ...
	I0816 10:17:49.007874    4259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:17:49.019121    4259 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1932/cgroup
	W0816 10:17:49.027544    4259 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1932/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:17:49.027596    4259 ssh_runner.go:195] Run: ls
	I0816 10:17:49.030763    4259 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0816 10:17:49.033853    4259 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0816 10:17:49.033864    4259 status.go:422] ha-286000-m02 apiserver status = Running (err=<nil>)
	I0816 10:17:49.033872    4259 status.go:257] ha-286000-m02 status: &{Name:ha-286000-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:17:49.033883    4259 status.go:255] checking status of ha-286000-m03 ...
	I0816 10:17:49.034132    4259 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:17:49.034152    4259 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:17:49.042939    4259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51499
	I0816 10:17:49.043299    4259 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:17:49.043622    4259 main.go:141] libmachine: Using API Version  1
	I0816 10:17:49.043633    4259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:17:49.043864    4259 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:17:49.044000    4259 main.go:141] libmachine: (ha-286000-m03) Calling .GetState
	I0816 10:17:49.044092    4259 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:17:49.044190    4259 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:17:49.045158    4259 status.go:330] ha-286000-m03 host status = "Running" (err=<nil>)
	I0816 10:17:49.045168    4259 host.go:66] Checking if "ha-286000-m03" exists ...
	I0816 10:17:49.045430    4259 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:17:49.045469    4259 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:17:49.054106    4259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51501
	I0816 10:17:49.054453    4259 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:17:49.054786    4259 main.go:141] libmachine: Using API Version  1
	I0816 10:17:49.054796    4259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:17:49.055022    4259 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:17:49.055127    4259 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:17:49.055224    4259 host.go:66] Checking if "ha-286000-m03" exists ...
	I0816 10:17:49.055480    4259 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:17:49.055511    4259 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:17:49.064156    4259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51503
	I0816 10:17:49.064499    4259 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:17:49.064822    4259 main.go:141] libmachine: Using API Version  1
	I0816 10:17:49.064831    4259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:17:49.065040    4259 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:17:49.065135    4259 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:17:49.065267    4259 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:17:49.065278    4259 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:17:49.065356    4259 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:17:49.065436    4259 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:17:49.065516    4259 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:17:49.065601    4259 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:17:49.101850    4259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:17:49.113804    4259 kubeconfig.go:125] found "ha-286000" server: "https://192.169.0.254:8443"
	I0816 10:17:49.113818    4259 api_server.go:166] Checking apiserver status ...
	I0816 10:17:49.113859    4259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 10:17:49.124405    4259 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:17:49.124416    4259 status.go:422] ha-286000-m03 apiserver status = Stopped (err=<nil>)
	I0816 10:17:49.124425    4259 status.go:257] ha-286000-m03 status: &{Name:ha-286000-m03 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:17:49.124434    4259 status.go:255] checking status of ha-286000-m04 ...
	I0816 10:17:49.124706    4259 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:17:49.124728    4259 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:17:49.133380    4259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51506
	I0816 10:17:49.133712    4259 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:17:49.134078    4259 main.go:141] libmachine: Using API Version  1
	I0816 10:17:49.134096    4259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:17:49.134315    4259 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:17:49.134431    4259 main.go:141] libmachine: (ha-286000-m04) Calling .GetState
	I0816 10:17:49.134517    4259 main.go:141] libmachine: (ha-286000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:17:49.134586    4259 main.go:141] libmachine: (ha-286000-m04) DBG | hyperkit pid from json: 4248
	I0816 10:17:49.135557    4259 status.go:330] ha-286000-m04 host status = "Running" (err=<nil>)
	I0816 10:17:49.135566    4259 host.go:66] Checking if "ha-286000-m04" exists ...
	I0816 10:17:49.135828    4259 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:17:49.135850    4259 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:17:49.144676    4259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51508
	I0816 10:17:49.145017    4259 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:17:49.145373    4259 main.go:141] libmachine: Using API Version  1
	I0816 10:17:49.145392    4259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:17:49.145592    4259 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:17:49.145705    4259 main.go:141] libmachine: (ha-286000-m04) Calling .GetIP
	I0816 10:17:49.145795    4259 host.go:66] Checking if "ha-286000-m04" exists ...
	I0816 10:17:49.146035    4259 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:17:49.146057    4259 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:17:49.154972    4259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51510
	I0816 10:17:49.155301    4259 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:17:49.155651    4259 main.go:141] libmachine: Using API Version  1
	I0816 10:17:49.155662    4259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:17:49.155861    4259 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:17:49.155978    4259 main.go:141] libmachine: (ha-286000-m04) Calling .DriverName
	I0816 10:17:49.156096    4259 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:17:49.156107    4259 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHHostname
	I0816 10:17:49.156183    4259 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHPort
	I0816 10:17:49.156264    4259 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHKeyPath
	I0816 10:17:49.156343    4259 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHUsername
	I0816 10:17:49.156438    4259 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m04/id_rsa Username:docker}
	I0816 10:17:49.188712    4259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:17:49.199077    4259 status.go:257] ha-286000-m04 status: &{Name:ha-286000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:236: failed to run minikube status. args "out/minikube-darwin-amd64 -p ha-286000 status -v=7 --alsologtostderr" : exit status 2
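
For context on the failure above: the -v=7 trace shows how the status command probes each control-plane node — it pgreps for a kube-apiserver pid, inspects that pid's freezer cgroup, and finally issues a GET against https://192.169.0.254:8443/healthz; ha-286000-m03 is reported Stopped because the pgrep step found no matching process. The sketch below reproduces that probe sequence outside minikube. It is illustrative only, not minikube source: the function name apiserverStatus is hypothetical, the host/port and shell commands are taken verbatim from the log, and TLS verification is skipped on the assumption that the cluster serves a self-signed certificate.

// Sketch (assumed helper, not minikube code): replays the apiserver probe
// sequence visible in the status log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func apiserverStatus(host string) string {
	// Mirror of: sudo pgrep -xnf kube-apiserver.*minikube.*
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		return "Stopped" // pgrep exits 1 when no process matches, as on m03 above
	}
	pid := strings.TrimSpace(string(out))

	// Mirror of: sudo egrep ^[0-9]+:freezer: /proc/<pid>/cgroup
	// Advisory only: on cgroup v2 hosts there is no freezer controller line,
	// which is the same non-fatal warning the log shows for m02.
	_ = exec.Command("sudo", "egrep", "^[0-9]+:freezer:", "/proc/"+pid+"/cgroup").Run()

	// Mirror of: Checking apiserver healthz at https://<vip>:8443/healthz
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // assumed self-signed cluster cert
	}}
	resp, err := client.Get("https://" + host + ":8443/healthz")
	if err != nil {
		return "Stopped"
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode == 200 && string(body) == "ok" {
		return "Running" // matches: returned 200: ok
	}
	return fmt.Sprintf("Error (%d)", resp.StatusCode)
}

func main() {
	fmt.Println(apiserverStatus("192.169.0.254"))
}

Note the probe targets the shared VIP (192.169.0.254), not the node's own IP, which is why a healthy peer can answer even while the local apiserver is down; the per-node pgrep is what distinguishes the two cases.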
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-286000 -n ha-286000
helpers_test.go:244: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-286000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-286000 logs -n 25: (2.406468732s)
helpers_test.go:252: TestMultiControlPlane/serial/AddWorkerNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:15 PDT | 16 Aug 24 10:15 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:15 PDT | 16 Aug 24 10:15 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:15 PDT | 16 Aug 24 10:15 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:15 PDT | 16 Aug 24 10:15 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:15 PDT | 16 Aug 24 10:15 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:15 PDT | 16 Aug 24 10:15 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT |                     |
	|         | busybox-7dff88458-99xmp --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT |                     |
	|         | busybox-7dff88458-99xmp --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT |                     |
	|         | busybox-7dff88458-99xmp -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92 -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT |                     |
	|         | busybox-7dff88458-99xmp              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92 -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| node    | add -p ha-286000 -v=7                | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:17 PDT | 16 Aug 24 10:17 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 10:01:48
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 10:01:48.113296    3758 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:01:48.113462    3758 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:01:48.113467    3758 out.go:358] Setting ErrFile to fd 2...
	I0816 10:01:48.113470    3758 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:01:48.113648    3758 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1276/.minikube/bin
	I0816 10:01:48.115192    3758 out.go:352] Setting JSON to false
	I0816 10:01:48.140204    3758 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1878,"bootTime":1723825830,"procs":433,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0816 10:01:48.140316    3758 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:01:48.194498    3758 out.go:177] * [ha-286000] minikube v1.33.1 on Darwin 14.6.1
	I0816 10:01:48.236650    3758 notify.go:220] Checking for updates...
	I0816 10:01:48.262360    3758 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:01:48.320415    3758 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:01:48.381368    3758 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0816 10:01:48.452505    3758 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:01:48.474398    3758 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:01:48.495459    3758 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:01:48.520120    3758 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:01:48.550379    3758 out.go:177] * Using the hyperkit driver based on user configuration
	I0816 10:01:48.592331    3758 start.go:297] selected driver: hyperkit
	I0816 10:01:48.592358    3758 start.go:901] validating driver "hyperkit" against <nil>
	I0816 10:01:48.592382    3758 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:01:48.596757    3758 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:01:48.596907    3758 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19461-1276/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0816 10:01:48.605545    3758 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0816 10:01:48.609748    3758 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:01:48.609772    3758 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0816 10:01:48.609805    3758 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 10:01:48.610046    3758 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:01:48.610094    3758 cni.go:84] Creating CNI manager for ""
	I0816 10:01:48.610103    3758 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0816 10:01:48.610108    3758 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0816 10:01:48.610236    3758 start.go:340] cluster config:
	{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:01:48.610323    3758 iso.go:125] acquiring lock: {Name:mkb430b43b5e7a8a683454482519837ed996c78d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:01:48.632334    3758 out.go:177] * Starting "ha-286000" primary control-plane node in "ha-286000" cluster
	I0816 10:01:48.653456    3758 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:01:48.653537    3758 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0816 10:01:48.653563    3758 cache.go:56] Caching tarball of preloaded images
	I0816 10:01:48.653777    3758 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 10:01:48.653813    3758 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:01:48.654332    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:01:48.654370    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json: {Name:mk79fdae2e45cdfc987caaf16c5f211150e90dc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:01:48.654995    3758 start.go:360] acquireMachinesLock for ha-286000: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:01:48.655106    3758 start.go:364] duration metric: took 87.458µs to acquireMachinesLock for "ha-286000"
	I0816 10:01:48.655154    3758 start.go:93] Provisioning new machine with config: &{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:01:48.655246    3758 start.go:125] createHost starting for "" (driver="hyperkit")
	I0816 10:01:48.677450    3758 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 10:01:48.677701    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:01:48.677786    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:01:48.687593    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51004
	I0816 10:01:48.687935    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:01:48.688347    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:01:48.688357    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:01:48.688562    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:01:48.688668    3758 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:01:48.688765    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:01:48.688895    3758 start.go:159] libmachine.API.Create for "ha-286000" (driver="hyperkit")
	I0816 10:01:48.688916    3758 client.go:168] LocalClient.Create starting
	I0816 10:01:48.688948    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem
	I0816 10:01:48.688999    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:01:48.689012    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:01:48.689063    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem
	I0816 10:01:48.689100    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:01:48.689111    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:01:48.689128    3758 main.go:141] libmachine: Running pre-create checks...
	I0816 10:01:48.689135    3758 main.go:141] libmachine: (ha-286000) Calling .PreCreateCheck
	I0816 10:01:48.689224    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:48.689427    3758 main.go:141] libmachine: (ha-286000) Calling .GetConfigRaw
	I0816 10:01:48.698771    3758 main.go:141] libmachine: Creating machine...
	I0816 10:01:48.698794    3758 main.go:141] libmachine: (ha-286000) Calling .Create
	I0816 10:01:48.699018    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:48.699296    3758 main.go:141] libmachine: (ha-286000) DBG | I0816 10:01:48.699015    3766 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:01:48.699450    3758 main.go:141] libmachine: (ha-286000) Downloading /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1276/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 10:01:48.884950    3758 main.go:141] libmachine: (ha-286000) DBG | I0816 10:01:48.884833    3766 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa...
	I0816 10:01:48.986348    3758 main.go:141] libmachine: (ha-286000) DBG | I0816 10:01:48.986275    3766 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/ha-286000.rawdisk...
	I0816 10:01:48.986388    3758 main.go:141] libmachine: (ha-286000) DBG | Writing magic tar header
	I0816 10:01:48.986396    3758 main.go:141] libmachine: (ha-286000) DBG | Writing SSH key tar header
	I0816 10:01:48.987038    3758 main.go:141] libmachine: (ha-286000) DBG | I0816 10:01:48.987004    3766 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000 ...
	I0816 10:01:49.360700    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:49.360726    3758 main.go:141] libmachine: (ha-286000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/hyperkit.pid
	I0816 10:01:49.360741    3758 main.go:141] libmachine: (ha-286000) DBG | Using UUID ad96de67-e238-408c-89eb-d74e5b68d297
	I0816 10:01:49.471852    3758 main.go:141] libmachine: (ha-286000) DBG | Generated MAC 66:c8:48:4e:12:1b
	I0816 10:01:49.471873    3758 main.go:141] libmachine: (ha-286000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000
	I0816 10:01:49.471908    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ad96de67-e238-408c-89eb-d74e5b68d297", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000b2330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:01:49.471951    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ad96de67-e238-408c-89eb-d74e5b68d297", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000b2330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:01:49.471996    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "ad96de67-e238-408c-89eb-d74e5b68d297", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/ha-286000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"}
	I0816 10:01:49.472043    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U ad96de67-e238-408c-89eb-d74e5b68d297 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/ha-286000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"
	I0816 10:01:49.472063    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 10:01:49.475179    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: Pid is 3771
	I0816 10:01:49.475583    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 0
	I0816 10:01:49.475597    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:49.475674    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:49.476510    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:49.476583    3758 main.go:141] libmachine: (ha-286000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0816 10:01:49.476592    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:01:49.476602    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:01:49.476612    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:01:49.482826    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 10:01:49.534430    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 10:01:49.535040    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:01:49.535056    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:01:49.535064    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:01:49.535072    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:01:49.909510    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 10:01:49.909527    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 10:01:50.024439    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:01:50.024459    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:01:50.024479    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:01:50.024494    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:01:50.025363    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 10:01:50.025375    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 10:01:51.477573    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 1
	I0816 10:01:51.477587    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:51.477660    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:51.478481    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:51.478504    3758 main.go:141] libmachine: (ha-286000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0816 10:01:51.478511    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:01:51.478520    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:01:51.478531    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:01:53.479582    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 2
	I0816 10:01:53.479599    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:53.479760    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:53.480630    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:53.480678    3758 main.go:141] libmachine: (ha-286000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0816 10:01:53.480688    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:01:53.480697    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:01:53.480710    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:01:55.482614    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 3
	I0816 10:01:55.482630    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:55.482703    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:55.483616    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:55.483662    3758 main.go:141] libmachine: (ha-286000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0816 10:01:55.483672    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:01:55.483679    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:01:55.483687    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:01:55.581983    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:55 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 10:01:55.582063    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:55 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 10:01:55.582075    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:55 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 10:01:55.606414    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:55 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 10:01:57.484306    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 4
	I0816 10:01:57.484322    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:57.484414    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:57.485196    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:57.485261    3758 main.go:141] libmachine: (ha-286000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0816 10:01:57.485273    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:01:57.485286    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:01:57.485294    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:01:59.485978    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 5
	I0816 10:01:59.486008    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:59.486192    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:59.487610    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:59.487709    3758 main.go:141] libmachine: (ha-286000) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:01:59.487730    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:01:59.487784    3758 main.go:141] libmachine: (ha-286000) DBG | Found match: 66:c8:48:4e:12:1b
	I0816 10:01:59.487797    3758 main.go:141] libmachine: (ha-286000) DBG | IP: 192.169.0.5
	I0816 10:01:59.487831    3758 main.go:141] libmachine: (ha-286000) Calling .GetConfigRaw
	I0816 10:01:59.488860    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:01:59.489069    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:01:59.489276    3758 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 10:01:59.489289    3758 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:01:59.489463    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:59.489567    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:59.490615    3758 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 10:01:59.490628    3758 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 10:01:59.490644    3758 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 10:01:59.490652    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:01:59.490782    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:01:59.490923    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:01:59.491055    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:01:59.491190    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:01:59.491362    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:01:59.491595    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:01:59.491605    3758 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 10:01:59.511220    3758 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0816 10:02:02.573095    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 10:02:02.573108    3758 main.go:141] libmachine: Detecting the provisioner...
	I0816 10:02:02.573113    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:02.573255    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:02.573385    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.573489    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.573602    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:02.573753    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:02.573906    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:02.573914    3758 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 10:02:02.632510    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 10:02:02.632561    3758 main.go:141] libmachine: found compatible host: buildroot
	I0816 10:02:02.632568    3758 main.go:141] libmachine: Provisioning with buildroot...
	I0816 10:02:02.632573    3758 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:02:02.632721    3758 buildroot.go:166] provisioning hostname "ha-286000"
	I0816 10:02:02.632732    3758 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:02:02.632834    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:02.632956    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:02.633046    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.633122    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.633236    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:02.633363    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:02.633492    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:02.633500    3758 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-286000 && echo "ha-286000" | sudo tee /etc/hostname
	I0816 10:02:02.702336    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-286000
	
	I0816 10:02:02.702354    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:02.702482    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:02.702569    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.702670    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.702756    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:02.702893    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:02.703040    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:02.703052    3758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-286000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-286000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-286000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 10:02:02.766997    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 10:02:02.767016    3758 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19461-1276/.minikube CaCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19461-1276/.minikube}
	I0816 10:02:02.767026    3758 buildroot.go:174] setting up certificates
	I0816 10:02:02.767038    3758 provision.go:84] configureAuth start
	I0816 10:02:02.767044    3758 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:02:02.767175    3758 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:02:02.767270    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:02.767372    3758 provision.go:143] copyHostCerts
	I0816 10:02:02.767406    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:02:02.767476    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem, removing ...
	I0816 10:02:02.767485    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:02:02.767630    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem (1679 bytes)
	I0816 10:02:02.767837    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:02:02.767868    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem, removing ...
	I0816 10:02:02.767873    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:02:02.767991    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem (1078 bytes)
	I0816 10:02:02.768146    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:02:02.768179    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem, removing ...
	I0816 10:02:02.768184    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:02:02.768268    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem (1123 bytes)
	I0816 10:02:02.768410    3758 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem org=jenkins.ha-286000 san=[127.0.0.1 192.169.0.5 ha-286000 localhost minikube]
	I0816 10:02:02.915225    3758 provision.go:177] copyRemoteCerts
	I0816 10:02:02.915279    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 10:02:02.915299    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:02.915444    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:02.915558    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.915640    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:02.915723    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:02.953069    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 10:02:02.953145    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 10:02:02.973516    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 10:02:02.973600    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0816 10:02:02.993038    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 10:02:02.993100    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 10:02:03.013565    3758 provision.go:87] duration metric: took 246.514857ms to configureAuth
	I0816 10:02:03.013581    3758 buildroot.go:189] setting minikube options for container-runtime
	I0816 10:02:03.013724    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:02:03.013737    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:03.013889    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:03.013978    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:03.014067    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.014147    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.014250    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:03.014369    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:03.014498    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:03.014506    3758 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 10:02:03.073645    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 10:02:03.073660    3758 buildroot.go:70] root file system type: tmpfs
	I0816 10:02:03.073734    3758 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 10:02:03.073745    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:03.073872    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:03.073966    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.074063    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.074164    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:03.074292    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:03.074429    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:03.074479    3758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 10:02:03.148891    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 10:02:03.148912    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:03.149052    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:03.149138    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.149235    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.149324    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:03.149466    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:03.149604    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:03.149616    3758 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 10:02:04.780498    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0816 10:02:04.780519    3758 main.go:141] libmachine: Checking connection to Docker...
	I0816 10:02:04.780526    3758 main.go:141] libmachine: (ha-286000) Calling .GetURL
	I0816 10:02:04.780691    3758 main.go:141] libmachine: Docker is up and running!
	I0816 10:02:04.780698    3758 main.go:141] libmachine: Reticulating splines...
	I0816 10:02:04.780702    3758 client.go:171] duration metric: took 16.091876944s to LocalClient.Create
	I0816 10:02:04.780713    3758 start.go:167] duration metric: took 16.091916939s to libmachine.API.Create "ha-286000"
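The unit install a few lines up follows a diff-then-replace pattern: write docker.service.new, diff it against the live unit, and only when they differ move the new file into place and daemon-reload/enable/restart. A rough Go equivalent of that idempotent install (installUnit, run, and the demo path are illustrative helpers, not minikube's API; the systemctl steps need root and a systemd host):

// unitinstall.go: sketch of "install the unit only if it changed".
package main

import (
	"bytes"
	"os"
	"os/exec"
)

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func installUnit(path string, content []byte) error {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, content) {
		return nil // unchanged: no replace, no restart (the `diff || {...}` branch)
	}
	if err := os.WriteFile(path+".new", content, 0644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	// Pick up the new unit and (re)start the service, as in the log.
	for _, args := range [][]string{{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"}} {
		if err := run("systemctl", args...); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	// Demo path; a real install targets /lib/systemd/system/docker.service.
	if err := installUnit("/tmp/docker.service.demo", []byte("[Unit]\nDescription=example\n")); err != nil {
		panic(err)
	}
}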
	I0816 10:02:04.780722    3758 start.go:293] postStartSetup for "ha-286000" (driver="hyperkit")
	I0816 10:02:04.780729    3758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 10:02:04.780738    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:04.780873    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 10:02:04.780884    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:04.780956    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:04.781047    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:04.781154    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:04.781275    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:04.819650    3758 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 10:02:04.823314    3758 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 10:02:04.823335    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/addons for local assets ...
	I0816 10:02:04.823443    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/files for local assets ...
	I0816 10:02:04.823624    3758 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> 18312.pem in /etc/ssl/certs
	I0816 10:02:04.823631    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /etc/ssl/certs/18312.pem
	I0816 10:02:04.823837    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 10:02:04.831860    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:02:04.850901    3758 start.go:296] duration metric: took 70.172878ms for postStartSetup
	I0816 10:02:04.850935    3758 main.go:141] libmachine: (ha-286000) Calling .GetConfigRaw
	I0816 10:02:04.851561    3758 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:02:04.851702    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:02:04.852053    3758 start.go:128] duration metric: took 16.196888252s to createHost
	I0816 10:02:04.852071    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:04.852167    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:04.852261    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:04.852344    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:04.852420    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:04.852525    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:04.852648    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:04.852655    3758 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 10:02:04.911299    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723827724.986438448
	
	I0816 10:02:04.911317    3758 fix.go:216] guest clock: 1723827724.986438448
	I0816 10:02:04.911322    3758 fix.go:229] Guest: 2024-08-16 10:02:04.986438448 -0700 PDT Remote: 2024-08-16 10:02:04.852061 -0700 PDT m=+16.776125584 (delta=134.377448ms)
	I0816 10:02:04.911341    3758 fix.go:200] guest clock delta is within tolerance: 134.377448ms
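The guest-clock check works by running `date +%s.%N` in the VM and comparing the parsed timestamp against the host clock; here the delta is ~134ms, within tolerance. A small sketch of that parse-and-compare step (the 1s tolerance is an assumption for illustration):

// clockdelta.go: parse the guest's `date +%s.%N` output and compare it to
// the host clock, as the fix.go lines above do.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Right-pad the fraction to 9 digits so "98" means 980000000ns, not 98ns.
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1723827724.986438448") // value from the log
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(time.Now())
	fmt.Printf("guest=%v delta=%v withinTolerance=%v\n", guest, delta, delta.Abs() < time.Second)
}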
	I0816 10:02:04.911345    3758 start.go:83] releasing machines lock for "ha-286000", held for 16.256325696s
	I0816 10:02:04.911364    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:04.911498    3758 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:02:04.911592    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:04.911942    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:04.912068    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:04.912144    3758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 10:02:04.912171    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:04.912218    3758 ssh_runner.go:195] Run: cat /version.json
	I0816 10:02:04.912231    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:04.912262    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:04.912305    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:04.912360    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:04.912384    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:04.912437    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:04.912464    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:04.912547    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:04.912567    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:04.945688    3758 ssh_runner.go:195] Run: systemctl --version
	I0816 10:02:04.997158    3758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 10:02:05.002278    3758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 10:02:05.002315    3758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 10:02:05.015089    3758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
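Disabling the stock bridge/podman CNI configs amounts to renaming each matching file in /etc/cni/net.d to *.mk_disabled so the runtime ignores it. A sketch of the same rename pass in Go, using os.ReadDir on a demo directory instead of `find -exec mv`:

// cnidisable.go: rename bridge/podman CNI config files to *.mk_disabled.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/tmp/cni-net.d.demo" // stand-in for /etc/cni/net.d
	entries, err := os.ReadDir(dir)
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue // already disabled, mirrors `-not -name *.mk_disabled`
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				panic(err)
			}
			fmt.Println("disabled", src)
		}
	}
}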
	I0816 10:02:05.015098    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:02:05.015195    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:02:05.030338    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 10:02:05.038588    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 10:02:05.046898    3758 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 10:02:05.046932    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 10:02:05.055211    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:02:05.063533    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 10:02:05.073244    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:02:05.081895    3758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 10:02:05.090915    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 10:02:05.099773    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 10:02:05.108719    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 10:02:05.117772    3758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 10:02:05.125574    3758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 10:02:05.145403    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:05.261568    3758 ssh_runner.go:195] Run: sudo systemctl restart containerd
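The run of sed edits above rewrites /etc/containerd/config.toml in place, for example forcing `SystemdCgroup = false` so containerd uses the cgroupfs driver that the rest of this start expects. The same substitution as a Go regexp, against a demo path:

// cgroupfix.go: equivalent of
//   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
package main

import (
	"os"
	"regexp"
)

func main() {
	path := "/tmp/config.toml.demo" // stand-in for /etc/containerd/config.toml
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	re := regexp.MustCompile(`(?m)^([ \t]*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0644); err != nil {
		panic(err)
	}
}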
	I0816 10:02:05.278920    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:02:05.278996    3758 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 10:02:05.293925    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:02:05.306704    3758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 10:02:05.320000    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:02:05.330230    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:02:05.340255    3758 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 10:02:05.369098    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:02:05.379374    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:02:05.395120    3758 ssh_runner.go:195] Run: which cri-dockerd
	I0816 10:02:05.397921    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 10:02:05.404866    3758 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 10:02:05.417935    3758 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 10:02:05.514793    3758 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 10:02:05.626208    3758 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 10:02:05.626292    3758 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 10:02:05.640317    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:05.737978    3758 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:02:08.105430    3758 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.367458496s)
	I0816 10:02:08.105491    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0816 10:02:08.115970    3758 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0816 10:02:08.129325    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:02:08.140312    3758 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0816 10:02:08.237628    3758 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0816 10:02:08.340285    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:08.436168    3758 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0816 10:02:08.457808    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:02:08.468314    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:08.561830    3758 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0816 10:02:08.620080    3758 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0816 10:02:08.620164    3758 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0816 10:02:08.624716    3758 start.go:563] Will wait 60s for crictl version
	I0816 10:02:08.624764    3758 ssh_runner.go:195] Run: which crictl
	I0816 10:02:08.627806    3758 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 10:02:08.660437    3758 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0816 10:02:08.660505    3758 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:02:08.678073    3758 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:02:08.722165    3758 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0816 10:02:08.722217    3758 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:02:08.722605    3758 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0816 10:02:08.727140    3758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
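The /etc/hosts update is idempotent: strip any existing line for the name, then append a fresh `IP<tab>name` entry, mirroring the `{ grep -v ...; echo ...; }` pipeline above. A sketch of that logic (writing to a demo path rather than /etc/hosts; blank lines are dropped as a side effect, as with the grep):

// hostsentry.go: ensure exactly one hosts entry for a given name.
package main

import (
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var out []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+name) {
			continue // filter the old entry, like grep -v $'\t<name>$'
		}
		out = append(out, line)
	}
	out = append(out, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(out, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/tmp/hosts.demo", "192.169.0.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}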
	I0816 10:02:08.736769    3758 kubeadm.go:883] updating cluster {Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 10:02:08.736830    3758 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:02:08.736887    3758 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 10:02:08.748223    3758 docker.go:685] Got preloaded images: 
	I0816 10:02:08.748236    3758 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.0 wasn't preloaded
	I0816 10:02:08.748294    3758 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0816 10:02:08.755601    3758 ssh_runner.go:195] Run: which lz4
	I0816 10:02:08.758510    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0816 10:02:08.758630    3758 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 10:02:08.761594    3758 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 10:02:08.761610    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342554258 bytes)
	I0816 10:02:09.771496    3758 docker.go:649] duration metric: took 1.012922149s to copy over tarball
	I0816 10:02:09.771559    3758 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 10:02:12.050040    3758 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.27849665s)
	I0816 10:02:12.050054    3758 ssh_runner.go:146] rm: /preloaded.tar.lz4
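The preload is an lz4-compressed tarball extracted with `tar --xattrs -I lz4` to seed /var/lib/docker so no images need to be pulled. As an illustration of reading such an archive in Go (using the third-party github.com/pierrec/lz4/v4 package; minikube itself shells out to tar, as the log shows):

// preload_list.go: walk the entries of an lz4-compressed tarball.
package main

import (
	"archive/tar"
	"fmt"
	"io"
	"os"

	"github.com/pierrec/lz4/v4"
)

func main() {
	f, err := os.Open("preloaded.tar.lz4") // local copy of the preload
	if err != nil {
		panic(err)
	}
	defer f.Close()

	tr := tar.NewReader(lz4.NewReader(f))
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			panic(err)
		}
		// A real extractor would create dirs/files and restore xattrs
		// (note the --xattrs flags in the log); here we only list entries.
		fmt.Printf("%s (%d bytes)\n", hdr.Name, hdr.Size)
	}
}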
	I0816 10:02:12.075438    3758 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0816 10:02:12.083331    3758 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0816 10:02:12.097261    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:12.204348    3758 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:02:14.516272    3758 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.311936705s)
	I0816 10:02:14.516363    3758 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 10:02:14.529500    3758 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0816 10:02:14.529519    3758 cache_images.go:84] Images are preloaded, skipping loading
	I0816 10:02:14.529530    3758 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.0 docker true true} ...
	I0816 10:02:14.529607    3758 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-286000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 10:02:14.529674    3758 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0816 10:02:14.568093    3758 cni.go:84] Creating CNI manager for ""
	I0816 10:02:14.568107    3758 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0816 10:02:14.568120    3758 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 10:02:14.568134    3758 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-286000 NodeName:ha-286000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 10:02:14.568222    3758 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-286000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
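The kubeadm.yaml above stacks four YAML documents separated by `---`: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A quick sanity-check sketch that walks those documents with gopkg.in/yaml.v3 (an illustration only, not minikube's validation):

// kubeadmcheck.go: list the apiVersion/kind of each stacked YAML document.
package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // assumed local copy of the config above
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // end of the multi-document stream
			}
			panic(err)
		}
		fmt.Printf("found %s (%s)\n", doc.Kind, doc.APIVersion)
	}
}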
	
	I0816 10:02:14.568237    3758 kube-vip.go:115] generating kube-vip config ...
	I0816 10:02:14.568286    3758 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 10:02:14.580706    3758 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 10:02:14.580778    3758 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
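The manifest above runs kube-vip as a static pod that holds the HA virtual IP 192.169.0.254, using Lease-based leader election (plndr-cp-lock) and, since lb_enable was auto-set, load-balancing across the control-plane API servers. Conceptually the config is produced by filling a template with the VIP, interface, and image; a trimmed Go sketch of that rendering (the template text is a stand-in, not minikube's real one):

// kubevip_render.go: render a cut-down kube-vip static-pod manifest.
package main

import (
	"os"
	"text/template"
)

const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{.Image}}
    env:
    - name: address
      value: "{{.VIP}}"
    - name: vip_interface
      value: {{.Interface}}
    - name: lb_enable
      value: "{{.EnableLB}}"
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifest))
	err := t.Execute(os.Stdout, map[string]any{
		"Image":     "ghcr.io/kube-vip/kube-vip:v0.8.0",
		"VIP":       "192.169.0.254",
		"Interface": "eth0",
		"EnableLB":  true, // the log shows lb_enable auto-enabled for control-plane LB
	})
	if err != nil {
		panic(err)
	}
}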
	I0816 10:02:14.580832    3758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 10:02:14.588124    3758 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 10:02:14.588174    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0816 10:02:14.595283    3758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0816 10:02:14.608721    3758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 10:02:14.622036    3758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0816 10:02:14.635525    3758 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1446 bytes)
	I0816 10:02:14.648868    3758 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0816 10:02:14.651748    3758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:02:14.661305    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:14.752561    3758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:02:14.768967    3758 certs.go:68] Setting up /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000 for IP: 192.169.0.5
	I0816 10:02:14.768980    3758 certs.go:194] generating shared ca certs ...
	I0816 10:02:14.768991    3758 certs.go:226] acquiring lock for ca certs: {Name:mka549fbc467849834d2267da204028c49d88a3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:14.769183    3758 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key
	I0816 10:02:14.769258    3758 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key
	I0816 10:02:14.769269    3758 certs.go:256] generating profile certs ...
	I0816 10:02:14.769315    3758 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key
	I0816 10:02:14.769328    3758 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt with IP's: []
	I0816 10:02:14.849573    3758 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt ...
	I0816 10:02:14.849589    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt: {Name:mk9c6a2b5871e8d33f12e0a58268b028ea37ab4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:14.849911    3758 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key ...
	I0816 10:02:14.849919    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key: {Name:mk7d8ed264e583d7ced6b16b60c72c11b15a1f22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:14.850137    3758 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.af99fd6a
	I0816 10:02:14.850151    3758 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.af99fd6a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.254]
	I0816 10:02:14.968129    3758 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.af99fd6a ...
	I0816 10:02:14.968143    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.af99fd6a: {Name:mk3b8945a16852dca8274ec1ab3a9f2eade80831 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:14.968464    3758 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.af99fd6a ...
	I0816 10:02:14.968473    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.af99fd6a: {Name:mk90fcce3348a214c4733fe5a1fb806faff7773f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:14.968717    3758 certs.go:381] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.af99fd6a -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt
	I0816 10:02:14.968940    3758 certs.go:385] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.af99fd6a -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key
	I0816 10:02:14.969171    3758 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key
	I0816 10:02:14.969185    3758 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt with IP's: []
	I0816 10:02:15.029329    3758 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt ...
	I0816 10:02:15.029338    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt: {Name:mk70aaa3903fb58df2124d779dbea07aa95769f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:15.029604    3758 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key ...
	I0816 10:02:15.029611    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key: {Name:mka3a85354927e8da31f9dfc67f92dea3c77ca65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:15.029850    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 10:02:15.029876    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 10:02:15.029898    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 10:02:15.029940    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 10:02:15.029958    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 10:02:15.029990    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 10:02:15.030008    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 10:02:15.030025    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 10:02:15.030118    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem (1338 bytes)
	W0816 10:02:15.030163    3758 certs.go:480] ignoring /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831_empty.pem, impossibly tiny 0 bytes
	I0816 10:02:15.030171    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 10:02:15.030240    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem (1078 bytes)
	I0816 10:02:15.030268    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem (1123 bytes)
	I0816 10:02:15.030354    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem (1679 bytes)
	I0816 10:02:15.030467    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:02:15.030499    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:15.030525    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem -> /usr/share/ca-certificates/1831.pem
	I0816 10:02:15.030542    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /usr/share/ca-certificates/18312.pem
	I0816 10:02:15.031030    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 10:02:15.051618    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 10:02:15.071318    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 10:02:15.091017    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 10:02:15.110497    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0816 10:02:15.131033    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 10:02:15.150747    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 10:02:15.170186    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 10:02:15.189808    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 10:02:15.209868    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem --> /usr/share/ca-certificates/1831.pem (1338 bytes)
	I0816 10:02:15.229378    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /usr/share/ca-certificates/18312.pem (1708 bytes)
	I0816 10:02:15.248667    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 10:02:15.262035    3758 ssh_runner.go:195] Run: openssl version
	I0816 10:02:15.266135    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 10:02:15.275959    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:15.279367    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:48 /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:15.279403    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:15.283611    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 10:02:15.291872    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1831.pem && ln -fs /usr/share/ca-certificates/1831.pem /etc/ssl/certs/1831.pem"
	I0816 10:02:15.299890    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1831.pem
	I0816 10:02:15.303179    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 16:57 /usr/share/ca-certificates/1831.pem
	I0816 10:02:15.303212    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1831.pem
	I0816 10:02:15.307423    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1831.pem /etc/ssl/certs/51391683.0"
	I0816 10:02:15.315741    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18312.pem && ln -fs /usr/share/ca-certificates/18312.pem /etc/ssl/certs/18312.pem"
	I0816 10:02:15.324088    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18312.pem
	I0816 10:02:15.327441    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 16:57 /usr/share/ca-certificates/18312.pem
	I0816 10:02:15.327479    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18312.pem
	I0816 10:02:15.331723    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18312.pem /etc/ssl/certs/3ec20f2e.0"
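The openssl x509 -hash calls above compute each certificate's subject hash, and the `ln -fs` commands install the CAs under /etc/ssl/certs as <hash>.0 (e.g. b5213941.0 for minikubeCA.pem), which is how OpenSSL locates trusted roots. A sketch of that hash-and-symlink step (demo directory; requires the openssl CLI on PATH):

// certlink.go: install a CA cert into a hashed-symlink trust directory.
package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	dir := "/tmp/certs.demo"               // stand-in for /etc/ssl/certs
	_ = os.MkdirAll(dir, 0755)
	link := filepath.Join(dir, hash+".0")
	_ = os.Remove(link) // ln -fs: replace any stale link
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
}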
	I0816 10:02:15.339921    3758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 10:02:15.343061    3758 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 10:02:15.343109    3758 kubeadm.go:392] StartCluster: {Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:02:15.343194    3758 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0816 10:02:15.356016    3758 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 10:02:15.363405    3758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 10:02:15.370635    3758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 10:02:15.377806    3758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 10:02:15.377819    3758 kubeadm.go:157] found existing configuration files:
	
	I0816 10:02:15.377855    3758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 10:02:15.384868    3758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 10:02:15.384903    3758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 10:02:15.392001    3758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 10:02:15.398935    3758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 10:02:15.398971    3758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 10:02:15.406181    3758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 10:02:15.413108    3758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 10:02:15.413142    3758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 10:02:15.422603    3758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 10:02:15.435692    3758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 10:02:15.435749    3758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 10:02:15.450920    3758 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 10:02:15.521417    3758 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 10:02:15.521501    3758 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 10:02:15.590843    3758 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 10:02:15.590923    3758 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 10:02:15.591008    3758 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 10:02:15.600874    3758 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 10:02:15.632377    3758 out.go:235]   - Generating certificates and keys ...
	I0816 10:02:15.632427    3758 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 10:02:15.632475    3758 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 10:02:15.791885    3758 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0816 10:02:15.852803    3758 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0816 10:02:15.961982    3758 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0816 10:02:16.146557    3758 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0816 10:02:16.493401    3758 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0816 10:02:16.493556    3758 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-286000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I0816 10:02:16.704832    3758 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0816 10:02:16.705108    3758 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-286000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I0816 10:02:16.922818    3758 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0816 10:02:17.082810    3758 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0816 10:02:17.393939    3758 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0816 10:02:17.394070    3758 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 10:02:17.465159    3758 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 10:02:17.731984    3758 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 10:02:18.019833    3758 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 10:02:18.164168    3758 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 10:02:18.273756    3758 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 10:02:18.274215    3758 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 10:02:18.275949    3758 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 10:02:18.300490    3758 out.go:235]   - Booting up control plane ...
	I0816 10:02:18.300569    3758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 10:02:18.300638    3758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 10:02:18.300693    3758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 10:02:18.300780    3758 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 10:02:18.300850    3758 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 10:02:18.300879    3758 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 10:02:18.405245    3758 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 10:02:18.405330    3758 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 10:02:18.909805    3758 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 504.760374ms
	I0816 10:02:18.909928    3758 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0816 10:02:24.844637    3758 kubeadm.go:310] [api-check] The API server is healthy after 5.938882642s
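	[kubelet-check] and [api-check] are the same mechanism: poll an HTTP healthz endpoint until it returns 200 OK or the 4m0s budget runs out. A minimal sketch of that loop against the kubelet endpoint named above (the API-server probe additionally goes over HTTPS with the cluster CA, which this sketch leaves out):

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// pollHealthz keeps GETting a healthz URL until it answers 200 OK
	// or the deadline passes, like the kubelet-check step in the log.
	func pollHealthz(url string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := http.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s not healthy after %s", url, timeout)
	}

	func main() {
		// Endpoint and 4m budget are taken from the kubelet-check lines.
		fmt.Println(pollHealthz("http://127.0.0.1:10248/healthz", 4*time.Minute))
	}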
	I0816 10:02:24.854864    3758 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 10:02:24.862134    3758 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 10:02:24.890940    3758 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 10:02:24.891101    3758 kubeadm.go:310] [mark-control-plane] Marking the node ha-286000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 10:02:24.901441    3758 kubeadm.go:310] [bootstrap-token] Using token: 73merd.1elxantqs1p5mkiz
	I0816 10:02:24.939545    3758 out.go:235]   - Configuring RBAC rules ...
	I0816 10:02:24.939737    3758 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 10:02:24.944670    3758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 10:02:24.951393    3758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 10:02:24.953383    3758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 10:02:24.955899    3758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 10:02:24.958387    3758 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 10:02:25.251799    3758 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 10:02:25.676108    3758 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 10:02:26.249233    3758 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 10:02:26.249936    3758 kubeadm.go:310] 
	I0816 10:02:26.250001    3758 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 10:02:26.250014    3758 kubeadm.go:310] 
	I0816 10:02:26.250090    3758 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 10:02:26.250102    3758 kubeadm.go:310] 
	I0816 10:02:26.250122    3758 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 10:02:26.250175    3758 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 10:02:26.250221    3758 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 10:02:26.250229    3758 kubeadm.go:310] 
	I0816 10:02:26.250268    3758 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 10:02:26.250272    3758 kubeadm.go:310] 
	I0816 10:02:26.250305    3758 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 10:02:26.250313    3758 kubeadm.go:310] 
	I0816 10:02:26.250359    3758 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 10:02:26.250425    3758 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 10:02:26.250484    3758 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 10:02:26.250491    3758 kubeadm.go:310] 
	I0816 10:02:26.250569    3758 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 10:02:26.250648    3758 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 10:02:26.250656    3758 kubeadm.go:310] 
	I0816 10:02:26.250716    3758 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 73merd.1elxantqs1p5mkiz \
	I0816 10:02:26.250793    3758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:995590413ea712c4f7db2cf849d28264ebc291e6cd53d741ce1d22c1daaa1602 \
	I0816 10:02:26.250811    3758 kubeadm.go:310] 	--control-plane 
	I0816 10:02:26.250817    3758 kubeadm.go:310] 
	I0816 10:02:26.250879    3758 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 10:02:26.250885    3758 kubeadm.go:310] 
	I0816 10:02:26.250947    3758 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 73merd.1elxantqs1p5mkiz \
	I0816 10:02:26.251036    3758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:995590413ea712c4f7db2cf849d28264ebc291e6cd53d741ce1d22c1daaa1602 
	I0816 10:02:26.251297    3758 kubeadm.go:310] W0816 17:02:15.599099    1566 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 10:02:26.251521    3758 kubeadm.go:310] W0816 17:02:15.600578    1566 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 10:02:26.251608    3758 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
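	The --discovery-token-ca-cert-hash in both join commands is deterministic: kubeadm documents it as the SHA-256 of the cluster CA certificate's DER-encoded SubjectPublicKeyInfo (the RFC 7469 pinning format). A short Go sketch that recomputes it from the certificateDir shown earlier in this log:

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// CA path follows the certificateDir the log reports
		// ("/var/lib/minikube/certs").
		data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%x\n", sum)
	}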
	I0816 10:02:26.251620    3758 cni.go:84] Creating CNI manager for ""
	I0816 10:02:26.251624    3758 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0816 10:02:26.289324    3758 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 10:02:26.363318    3758 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0816 10:02:26.368971    3758 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0816 10:02:26.368983    3758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0816 10:02:26.382855    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0816 10:02:26.629869    3758 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 10:02:26.629947    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-286000 minikube.k8s.io/updated_at=2024_08_16T10_02_26_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd minikube.k8s.io/name=ha-286000 minikube.k8s.io/primary=true
	I0816 10:02:26.629949    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:26.658712    3758 ops.go:34] apiserver oom_adj: -16
	I0816 10:02:26.756402    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:27.257667    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:27.757259    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:28.256864    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:28.757602    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:29.256802    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:29.757282    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:29.882342    3758 kubeadm.go:1113] duration metric: took 3.252511045s to wait for elevateKubeSystemPrivileges
	I0816 10:02:29.882365    3758 kubeadm.go:394] duration metric: took 14.539506386s to StartCluster
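	The burst of identical "kubectl get sa default" runs above is a plain poll: minikube retries roughly every 500ms until the default ServiceAccount exists, which is what the elevateKubeSystemPrivileges duration metric measures. A sketch of that loop, with the kubectl binary and kubeconfig paths taken from the log (the one-minute timeout is an assumption, and the log actually runs the command under sudo via ssh_runner):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultSA re-runs "kubectl get sa default" until it exits
	// zero, i.e. until the default ServiceAccount has been created.
	func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command(kubectl, "get", "sa", "default",
				"--kubeconfig="+kubeconfig)
			if err := cmd.Run(); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default service account not ready after %s", timeout)
	}

	func main() {
		err := waitForDefaultSA("/var/lib/minikube/binaries/v1.31.0/kubectl",
			"/var/lib/minikube/kubeconfig", time.Minute)
		fmt.Println(err)
	}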
	I0816 10:02:29.882380    3758 settings.go:142] acquiring lock: {Name:mk60e377244eac756871b74b6bd45115654652a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:29.882471    3758 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:02:29.882922    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/kubeconfig: {Name:mk7f4491c8ec5cf96c96a5a1a5dbfba4301394a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:29.883167    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 10:02:29.883170    3758 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:02:29.883182    3758 start.go:241] waiting for startup goroutines ...
	I0816 10:02:29.883204    3758 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 10:02:29.883238    3758 addons.go:69] Setting storage-provisioner=true in profile "ha-286000"
	I0816 10:02:29.883244    3758 addons.go:69] Setting default-storageclass=true in profile "ha-286000"
	I0816 10:02:29.883261    3758 addons.go:234] Setting addon storage-provisioner=true in "ha-286000"
	I0816 10:02:29.883278    3758 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-286000"
	I0816 10:02:29.883280    3758 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:02:29.883305    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:02:29.883558    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:29.883573    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:29.883575    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:29.883598    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:29.892969    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51028
	I0816 10:02:29.893238    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51030
	I0816 10:02:29.893356    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:29.893605    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:29.893730    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:29.893741    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:29.893938    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:29.893951    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:29.893976    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:29.894142    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:29.894254    3758 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:02:29.894338    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:29.894365    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:29.894397    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:29.894424    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:02:29.896690    3758 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:02:29.896968    3758 kapi.go:59] client config for ha-286000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key", CAFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xa202f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 10:02:29.897373    3758 cert_rotation.go:140] Starting client certificate rotation controller
	I0816 10:02:29.897530    3758 addons.go:234] Setting addon default-storageclass=true in "ha-286000"
	I0816 10:02:29.897550    3758 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:02:29.897771    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:29.897789    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:29.903669    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51032
	I0816 10:02:29.904077    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:29.904431    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:29.904451    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:29.904736    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:29.904889    3758 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:02:29.904993    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:29.905054    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:02:29.906086    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:29.906730    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51034
	I0816 10:02:29.907081    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:29.907463    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:29.907491    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:29.907694    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:29.908045    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:29.908061    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:29.917300    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51036
	I0816 10:02:29.917667    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:29.918020    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:29.918034    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:29.918277    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:29.918384    3758 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:02:29.918483    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:29.918572    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:02:29.919534    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:29.919671    3758 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 10:02:29.919680    3758 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 10:02:29.919688    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:29.919789    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:29.919896    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:29.919990    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:29.920072    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:29.928735    3758 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 10:02:29.965711    3758 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 10:02:29.965725    3758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 10:02:29.965744    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:29.965926    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:29.966043    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:29.966151    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:29.966280    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:30.033245    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 10:02:30.037035    3758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 10:02:30.090040    3758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 10:02:30.396267    3758 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
	I0816 10:02:30.396311    3758 main.go:141] libmachine: Making call to close driver server
	I0816 10:02:30.396323    3758 main.go:141] libmachine: (ha-286000) Calling .Close
	I0816 10:02:30.396482    3758 main.go:141] libmachine: Successfully made call to close driver server
	I0816 10:02:30.396488    3758 main.go:141] libmachine: (ha-286000) DBG | Closing plugin on server side
	I0816 10:02:30.396492    3758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 10:02:30.396501    3758 main.go:141] libmachine: Making call to close driver server
	I0816 10:02:30.396507    3758 main.go:141] libmachine: (ha-286000) Calling .Close
	I0816 10:02:30.396655    3758 main.go:141] libmachine: Successfully made call to close driver server
	I0816 10:02:30.396662    3758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 10:02:30.396713    3758 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0816 10:02:30.396725    3758 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0816 10:02:30.396804    3758 round_trippers.go:463] GET https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0816 10:02:30.396810    3758 round_trippers.go:469] Request Headers:
	I0816 10:02:30.396817    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:02:30.396822    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:02:30.402286    3758 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 10:02:30.402706    3758 round_trippers.go:463] PUT https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0816 10:02:30.402713    3758 round_trippers.go:469] Request Headers:
	I0816 10:02:30.402718    3758 round_trippers.go:473]     Content-Type: application/json
	I0816 10:02:30.402721    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:02:30.402723    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:02:30.404290    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
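	These round_trippers lines are the default-storageclass addon reading and then updating the "standard" StorageClass over the HA VIP (192.169.0.254:8443) via client-go. A hedged sketch of the equivalent list call, assuming k8s.io/client-go is available and reusing the kubeconfig path from this run:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client from the integration run's kubeconfig, as the
		// kapi step above does.
		cfg, err := clientcmd.BuildConfigFromFlags("",
			"/Users/jenkins/minikube-integration/19461-1276/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Equivalent of the logged GET .../storage.k8s.io/v1/storageclasses.
		scs, err := cs.StorageV1().StorageClasses().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, sc := range scs.Items {
			fmt.Println(sc.Name)
		}
	}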
	I0816 10:02:30.404403    3758 main.go:141] libmachine: Making call to close driver server
	I0816 10:02:30.404416    3758 main.go:141] libmachine: (ha-286000) Calling .Close
	I0816 10:02:30.404565    3758 main.go:141] libmachine: Successfully made call to close driver server
	I0816 10:02:30.404575    3758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 10:02:30.404586    3758 main.go:141] libmachine: (ha-286000) DBG | Closing plugin on server side
	I0816 10:02:30.497806    3758 main.go:141] libmachine: Making call to close driver server
	I0816 10:02:30.497818    3758 main.go:141] libmachine: (ha-286000) Calling .Close
	I0816 10:02:30.498006    3758 main.go:141] libmachine: Successfully made call to close driver server
	I0816 10:02:30.498011    3758 main.go:141] libmachine: (ha-286000) DBG | Closing plugin on server side
	I0816 10:02:30.498016    3758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 10:02:30.498023    3758 main.go:141] libmachine: Making call to close driver server
	I0816 10:02:30.498040    3758 main.go:141] libmachine: (ha-286000) Calling .Close
	I0816 10:02:30.498160    3758 main.go:141] libmachine: Successfully made call to close driver server
	I0816 10:02:30.498168    3758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 10:02:30.498179    3758 main.go:141] libmachine: (ha-286000) DBG | Closing plugin on server side
	I0816 10:02:30.538607    3758 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0816 10:02:30.596522    3758 addons.go:510] duration metric: took 713.336883ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0816 10:02:30.596578    3758 start.go:246] waiting for cluster config update ...
	I0816 10:02:30.596599    3758 start.go:255] writing updated cluster config ...
	I0816 10:02:30.634683    3758 out.go:201] 
	I0816 10:02:30.655754    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:02:30.655861    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:02:30.677634    3758 out.go:177] * Starting "ha-286000-m02" control-plane node in "ha-286000" cluster
	I0816 10:02:30.737443    3758 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:02:30.737475    3758 cache.go:56] Caching tarball of preloaded images
	I0816 10:02:30.737660    3758 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 10:02:30.737679    3758 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:02:30.737771    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:02:30.738414    3758 start.go:360] acquireMachinesLock for ha-286000-m02: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:02:30.738494    3758 start.go:364] duration metric: took 63.585µs to acquireMachinesLock for "ha-286000-m02"
	I0816 10:02:30.738517    3758 start.go:93] Provisioning new machine with config: &{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:02:30.738590    3758 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I0816 10:02:30.760743    3758 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 10:02:30.760905    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:30.760935    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:30.771028    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51041
	I0816 10:02:30.771383    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:30.771745    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:30.771760    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:30.771967    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:30.772077    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:02:30.772157    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:30.772261    3758 start.go:159] libmachine.API.Create for "ha-286000" (driver="hyperkit")
	I0816 10:02:30.772276    3758 client.go:168] LocalClient.Create starting
	I0816 10:02:30.772303    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem
	I0816 10:02:30.772358    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:02:30.772368    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:02:30.772413    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem
	I0816 10:02:30.772450    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:02:30.772460    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:02:30.772472    3758 main.go:141] libmachine: Running pre-create checks...
	I0816 10:02:30.772477    3758 main.go:141] libmachine: (ha-286000-m02) Calling .PreCreateCheck
	I0816 10:02:30.772545    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:30.772568    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetConfigRaw
	I0816 10:02:30.781772    3758 main.go:141] libmachine: Creating machine...
	I0816 10:02:30.781789    3758 main.go:141] libmachine: (ha-286000-m02) Calling .Create
	I0816 10:02:30.781943    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:30.782181    3758 main.go:141] libmachine: (ha-286000-m02) DBG | I0816 10:02:30.781934    3805 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:02:30.782269    3758 main.go:141] libmachine: (ha-286000-m02) Downloading /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1276/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 10:02:30.982082    3758 main.go:141] libmachine: (ha-286000-m02) DBG | I0816 10:02:30.982020    3805 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa...
	I0816 10:02:31.266609    3758 main.go:141] libmachine: (ha-286000-m02) DBG | I0816 10:02:31.266514    3805 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/ha-286000-m02.rawdisk...
	I0816 10:02:31.266629    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Writing magic tar header
	I0816 10:02:31.266638    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Writing SSH key tar header
	I0816 10:02:31.267382    3758 main.go:141] libmachine: (ha-286000-m02) DBG | I0816 10:02:31.267242    3805 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02 ...
	I0816 10:02:31.642864    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:31.642889    3758 main.go:141] libmachine: (ha-286000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/hyperkit.pid
	I0816 10:02:31.642912    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Using UUID f7630301-2bc3-4935-a751-ac72999da031
	I0816 10:02:31.669349    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Generated MAC 72:69:8f:11:68:1d
	I0816 10:02:31.669369    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000
	I0816 10:02:31.669397    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f7630301-2bc3-4935-a751-ac72999da031", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:02:31.669426    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f7630301-2bc3-4935-a751-ac72999da031", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:02:31.669455    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "f7630301-2bc3-4935-a751-ac72999da031", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/ha-286000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"}
	I0816 10:02:31.669484    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U f7630301-2bc3-4935-a751-ac72999da031 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/ha-286000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"
	I0816 10:02:31.669499    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 10:02:31.672492    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: Pid is 3806
	I0816 10:02:31.672957    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 0
	I0816 10:02:31.673000    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:31.673054    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:31.674014    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:31.674086    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:02:31.674098    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:02:31.674120    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:02:31.674129    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:02:31.674137    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:02:31.680025    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 10:02:31.688109    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 10:02:31.688968    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:02:31.688993    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:02:31.689006    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:02:31.689016    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:02:32.077133    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 10:02:32.077153    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 10:02:32.191789    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:02:32.191806    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:02:32.191814    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:02:32.191825    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:02:32.192676    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 10:02:32.192686    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 10:02:33.675165    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 1
	I0816 10:02:33.675183    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:33.675297    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:33.676089    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:33.676141    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:02:33.676153    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:02:33.676162    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:02:33.676169    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:02:33.676193    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:02:35.676266    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 2
	I0816 10:02:35.676285    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:35.676360    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:35.677182    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:35.677237    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:02:35.677248    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:02:35.677262    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:02:35.677270    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:02:35.677281    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:02:37.678515    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 3
	I0816 10:02:37.678532    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:37.678572    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:37.679405    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:37.679441    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:02:37.679451    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:02:37.679461    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:02:37.679468    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:02:37.679483    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:02:37.782496    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:37 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 10:02:37.782531    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:37 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 10:02:37.782540    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:37 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 10:02:37.806630    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:37 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 10:02:39.680161    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 4
	I0816 10:02:39.680178    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:39.680273    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:39.681064    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:39.681112    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:02:39.681136    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:02:39.681155    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:02:39.681170    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:02:39.681181    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:02:41.681380    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 5
	I0816 10:02:41.681414    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:41.681563    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:41.682343    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:41.682392    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:02:41.682406    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:02:41.682415    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found match: 72:69:8f:11:68:1d
	I0816 10:02:41.682421    3758 main.go:141] libmachine: (ha-286000-m02) DBG | IP: 192.169.0.6
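	The "Attempt N" loop above is how the hyperkit driver learns the new VM's address: every two seconds it re-reads /var/db/dhcpd_leases and scans for the MAC it generated. A minimal parser for that lookup; it assumes the usual key=value block layout of the macOS leases file, with ip_address preceding hw_address inside each block, as the printed entries suggest:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// findLeaseIP scans the macOS DHCP lease file for a hardware address
	// and returns the IP recorded for it, mirroring the driver's loop.
	func findLeaseIP(path, mac string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()

		ip := ""
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if v, ok := strings.CutPrefix(line, "ip_address="); ok {
				ip = v
			}
			// hw_address lines look like "hw_address=1,72:69:8f:11:68:1d".
			if v, ok := strings.CutPrefix(line, "hw_address="); ok {
				if strings.HasSuffix(v, ","+mac) {
					return ip, nil
				}
			}
		}
		return "", fmt.Errorf("no lease for %s in %s", mac, path)
	}

	func main() {
		fmt.Println(findLeaseIP("/var/db/dhcpd_leases", "72:69:8f:11:68:1d"))
	}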
	I0816 10:02:41.682475    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetConfigRaw
	I0816 10:02:41.683135    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:41.683257    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:41.683358    3758 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 10:02:41.683367    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetState
	I0816 10:02:41.683478    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:41.683537    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:41.684414    3758 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 10:02:41.684423    3758 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 10:02:41.684427    3758 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 10:02:41.684431    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:41.684566    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:41.684692    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:41.684809    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:41.684943    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:41.685095    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:41.685300    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:41.685308    3758 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 10:02:42.746559    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
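	The "exit 0" probe above is libmachine's WaitForSSH: keep dialing the VM and opening a session until the trivial command succeeds. A sketch of the same loop using golang.org/x/crypto/ssh (an assumed dependency), with the address, user, and key path taken from this run:

	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	// waitForSSH dials the VM and runs "exit 0" until a session succeeds
	// or the deadline passes, like the WaitForSSH step in the log.
	func waitForSSH(addr, user, keyPath string, timeout time.Duration) error {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM
			Timeout:         5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if client, err := ssh.Dial("tcp", addr, cfg); err == nil {
				sess, serr := client.NewSession()
				if serr == nil {
					runErr := sess.Run("exit 0")
					sess.Close()
					client.Close()
					if runErr == nil {
						return nil
					}
				} else {
					client.Close()
				}
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("ssh not ready on %s after %s", addr, timeout)
	}

	func main() {
		fmt.Println(waitForSSH("192.169.0.6:22", "docker",
			"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa",
			2*time.Minute))
	}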
	I0816 10:02:42.746571    3758 main.go:141] libmachine: Detecting the provisioner...
	I0816 10:02:42.746577    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:42.746714    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:42.746824    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.746914    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.746988    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:42.747112    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:42.747269    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:42.747277    3758 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 10:02:42.811630    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 10:02:42.811666    3758 main.go:141] libmachine: found compatible host: buildroot
	I0816 10:02:42.811672    3758 main.go:141] libmachine: Provisioning with buildroot...
	I0816 10:02:42.811681    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:02:42.811816    3758 buildroot.go:166] provisioning hostname "ha-286000-m02"
	I0816 10:02:42.811828    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:02:42.811943    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:42.812045    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:42.812136    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.812274    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.812376    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:42.812513    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:42.812673    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:42.812682    3758 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-286000-m02 && echo "ha-286000-m02" | sudo tee /etc/hostname
	I0816 10:02:42.887531    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-286000-m02
	
	I0816 10:02:42.887545    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:42.887676    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:42.887769    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.887867    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.887953    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:42.888075    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:42.888214    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:42.888225    3758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-286000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-286000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-286000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 10:02:42.959625    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
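Editor's note on the step above: the /etc/hosts update is written to be idempotent, so re-provisioning the node never duplicates the 127.0.1.1 entry. A minimal standalone Go sketch of the same check-then-rewrite logic, run against a local file rather than over SSH (the path and the simplified substring match are illustrative, not minikube's actual implementation):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    	"strings"
    )

    func ensureHostname(path, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	text := string(data)
    	if strings.Contains(text, name) { // simplified stand-in for the grep -xq check above
    		return nil
    	}
    	entry := "127.0.1.1 " + name
    	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	if re.MatchString(text) {
    		text = re.ReplaceAllString(text, entry) // rewrite the existing 127.0.1.1 line
    	} else {
    		text += entry + "\n" // or append a new one, like the tee -a branch
    	}
    	return os.WriteFile(path, []byte(text), 0644)
    }

    func main() {
    	if err := ensureHostname("hosts.txt", "ha-286000-m02"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }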
	I0816 10:02:42.959641    3758 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19461-1276/.minikube CaCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19461-1276/.minikube}
	I0816 10:02:42.959649    3758 buildroot.go:174] setting up certificates
	I0816 10:02:42.959661    3758 provision.go:84] configureAuth start
	I0816 10:02:42.959666    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:02:42.959803    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:02:42.959899    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:42.959980    3758 provision.go:143] copyHostCerts
	I0816 10:02:42.960007    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:02:42.960055    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem, removing ...
	I0816 10:02:42.960061    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:02:42.960954    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem (1078 bytes)
	I0816 10:02:42.961160    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:02:42.961190    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem, removing ...
	I0816 10:02:42.961195    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:02:42.961273    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem (1123 bytes)
	I0816 10:02:42.961439    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:02:42.961509    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem, removing ...
	I0816 10:02:42.961515    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:02:42.961594    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem (1679 bytes)
	I0816 10:02:42.961746    3758 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem org=jenkins.ha-286000-m02 san=[127.0.0.1 192.169.0.6 ha-286000-m02 localhost minikube]
	I0816 10:02:43.093511    3758 provision.go:177] copyRemoteCerts
	I0816 10:02:43.093556    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 10:02:43.093572    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:43.093719    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:43.093818    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.093917    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:43.094005    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:02:43.132229    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 10:02:43.132299    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 10:02:43.151814    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 10:02:43.151873    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 10:02:43.171464    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 10:02:43.171532    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 10:02:43.191045    3758 provision.go:87] duration metric: took 231.381757ms to configureAuth
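For reference, the configureAuth phase above issues a server certificate whose SANs cover the node's IPs and names so the Docker TLS endpoint validates from any of them. A minimal Go sketch of issuing such a cert with crypto/x509; minikube signs with its own CA, while this sketch self-signs to stay short, and the names, IPs, org, and 26280h lifetime are taken from the log lines above:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-286000-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"ha-286000-m02", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.6")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }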
	I0816 10:02:43.191057    3758 buildroot.go:189] setting minikube options for container-runtime
	I0816 10:02:43.191187    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:02:43.191200    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:43.191321    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:43.191410    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:43.191497    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.191580    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.191658    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:43.191759    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:43.191891    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:43.191898    3758 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 10:02:43.256733    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 10:02:43.256744    3758 buildroot.go:70] root file system type: tmpfs
	I0816 10:02:43.256823    3758 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 10:02:43.256835    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:43.256961    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:43.257048    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.257142    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.257220    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:43.257364    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:43.257505    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:43.257553    3758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 10:02:43.330709    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 10:02:43.330729    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:43.330874    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:43.330977    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.331073    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.331167    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:43.331293    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:43.331440    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:43.331451    3758 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 10:02:44.884634    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0816 10:02:44.884648    3758 main.go:141] libmachine: Checking connection to Docker...
	I0816 10:02:44.884655    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetURL
	I0816 10:02:44.884793    3758 main.go:141] libmachine: Docker is up and running!
	I0816 10:02:44.884799    3758 main.go:141] libmachine: Reticulating splines...
	I0816 10:02:44.884804    3758 client.go:171] duration metric: took 14.112778669s to LocalClient.Create
	I0816 10:02:44.884820    3758 start.go:167] duration metric: took 14.112810851s to libmachine.API.Create "ha-286000"
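Two details of the docker.service written above are worth noting: the empty ExecStart= clears the command inherited from the base unit (as the embedded comment explains), and the install is conditional: `sudo diff -u old new || { mv; daemon-reload; enable; restart; }` only replaces the unit and restarts Docker when something actually changed. A minimal Go sketch of that install-if-changed step (local paths, no sudo; a sketch of the pattern, not minikube's code):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    	"os/exec"
    )

    func installIfChanged(livePath, stagedPath, unit string) error {
    	live, _ := os.ReadFile(livePath) // a missing unit reads as empty, like the failed diff above
    	staged, err := os.ReadFile(stagedPath)
    	if err != nil {
    		return err
    	}
    	if bytes.Equal(live, staged) {
    		return os.Remove(stagedPath) // nothing changed; drop the staged copy
    	}
    	if err := os.Rename(stagedPath, livePath); err != nil {
    		return err
    	}
    	for _, args := range [][]string{{"daemon-reload"}, {"enable", unit}, {"restart", unit}} {
    		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
    			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	if err := installIfChanged("/lib/systemd/system/docker.service",
    		"/lib/systemd/system/docker.service.new", "docker"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }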
	I0816 10:02:44.884826    3758 start.go:293] postStartSetup for "ha-286000-m02" (driver="hyperkit")
	I0816 10:02:44.884832    3758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 10:02:44.884843    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:44.884991    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 10:02:44.885004    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:44.885101    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:44.885204    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:44.885335    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:44.885428    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:02:44.925701    3758 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 10:02:44.929599    3758 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 10:02:44.929613    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/addons for local assets ...
	I0816 10:02:44.929722    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/files for local assets ...
	I0816 10:02:44.929908    3758 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> 18312.pem in /etc/ssl/certs
	I0816 10:02:44.929914    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /etc/ssl/certs/18312.pem
	I0816 10:02:44.930119    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 10:02:44.940484    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:02:44.973200    3758 start.go:296] duration metric: took 88.36731ms for postStartSetup
	I0816 10:02:44.973230    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetConfigRaw
	I0816 10:02:44.973848    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:02:44.973996    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:02:44.974351    3758 start.go:128] duration metric: took 14.23599979s to createHost
	I0816 10:02:44.974366    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:44.974448    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:44.974568    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:44.974660    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:44.974744    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:44.974844    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:44.974966    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:44.974973    3758 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 10:02:45.036267    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723827764.932365297
	
	I0816 10:02:45.036285    3758 fix.go:216] guest clock: 1723827764.932365297
	I0816 10:02:45.036291    3758 fix.go:229] Guest: 2024-08-16 10:02:44.932365297 -0700 PDT Remote: 2024-08-16 10:02:44.97436 -0700 PDT m=+56.899083401 (delta=-41.994703ms)
	I0816 10:02:45.036301    3758 fix.go:200] guest clock delta is within tolerance: -41.994703ms
	I0816 10:02:45.036306    3758 start.go:83] releasing machines lock for "ha-286000-m02", held for 14.298063452s
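The guest-clock check above runs `date +%s.%N` on the VM and compares it with the host clock; here the -41.99ms delta is within tolerance, so no adjustment is needed. A minimal Go sketch of the comparison, using the sample value from the log (the tolerance policy itself lives in minikube's fix.go and is not reproduced here):

    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    func main() {
    	out := "1723827764.932365297" // guest `date +%s.%N` output from the log above
    	secs, err := strconv.ParseFloat(out, 64)
    	if err != nil {
    		panic(err)
    	}
    	// float parse loses sub-microsecond precision; fine for a sketch
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	fmt.Println("guest clock delta:", time.Until(guest))
    }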
	I0816 10:02:45.036338    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:45.036468    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:02:45.062334    3758 out.go:177] * Found network options:
	I0816 10:02:45.105996    3758 out.go:177]   - NO_PROXY=192.169.0.5
	W0816 10:02:45.128174    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:02:45.128216    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:45.128829    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:45.128969    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:45.129060    3758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 10:02:45.129088    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	W0816 10:02:45.129115    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:02:45.129222    3758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0816 10:02:45.129232    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:45.129238    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:45.129370    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:45.129393    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:45.129500    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:45.129515    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:45.129631    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:02:45.129647    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:45.129727    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	W0816 10:02:45.164265    3758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 10:02:45.164324    3758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 10:02:45.211468    3758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 10:02:45.211489    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:02:45.211595    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:02:45.228144    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 10:02:45.237310    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 10:02:45.246272    3758 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 10:02:45.246316    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 10:02:45.255987    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:02:45.265400    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 10:02:45.274661    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:02:45.283628    3758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 10:02:45.292648    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 10:02:45.302207    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 10:02:45.311314    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 10:02:45.320414    3758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 10:02:45.328532    3758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 10:02:45.337229    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:45.444954    3758 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0816 10:02:45.464569    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:02:45.464652    3758 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 10:02:45.488292    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:02:45.500162    3758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 10:02:45.561835    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:02:45.572027    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:02:45.582122    3758 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 10:02:45.603011    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:02:45.613488    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:02:45.628434    3758 ssh_runner.go:195] Run: which cri-dockerd
	I0816 10:02:45.631358    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 10:02:45.638496    3758 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 10:02:45.652676    3758 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 10:02:45.755971    3758 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 10:02:45.860533    3758 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 10:02:45.860559    3758 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 10:02:45.874570    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:45.976095    3758 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:02:48.459358    3758 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.483288325s)
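The 130-byte /etc/docker/daemon.json written just above pins Docker's cgroup driver to cgroupfs; the log does not show the payload, so the content below is an assumption based on the "configuring docker to use cgroupfs" message. A minimal Go sketch of writing such a file:

    package main

    import (
    	"encoding/json"
    	"os"
    )

    func main() {
    	// assumed payload; the real 130-byte file is not shown in the log
    	cfg := map[string]any{
    		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
    	}
    	data, err := json.MarshalIndent(cfg, "", "  ")
    	if err != nil {
    		panic(err)
    	}
    	os.WriteFile("daemon.json", data, 0644) // real target: /etc/docker/daemon.json
    }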
	I0816 10:02:48.459417    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0816 10:02:48.469882    3758 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0816 10:02:48.484429    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:02:48.495038    3758 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0816 10:02:48.597129    3758 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0816 10:02:48.691412    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:48.797592    3758 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0816 10:02:48.812075    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:02:48.823307    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:48.918652    3758 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0816 10:02:48.980191    3758 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0816 10:02:48.980257    3758 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0816 10:02:48.984954    3758 start.go:563] Will wait 60s for crictl version
	I0816 10:02:48.985011    3758 ssh_runner.go:195] Run: which crictl
	I0816 10:02:48.988185    3758 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 10:02:49.015262    3758 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0816 10:02:49.015344    3758 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:02:49.032341    3758 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:02:49.078128    3758 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0816 10:02:49.137645    3758 out.go:177]   - env NO_PROXY=192.169.0.5
	I0816 10:02:49.176661    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:02:49.177129    3758 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0816 10:02:49.181778    3758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:02:49.191522    3758 mustload.go:65] Loading cluster: ha-286000
	I0816 10:02:49.191665    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:02:49.191885    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:49.191909    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:49.200721    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51064
	I0816 10:02:49.201069    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:49.201413    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:49.201424    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:49.201648    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:49.201776    3758 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:02:49.201862    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:49.201960    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:02:49.202920    3758 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:02:49.203188    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:49.203205    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:49.211926    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51066
	I0816 10:02:49.212258    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:49.212635    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:49.212652    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:49.212860    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:49.212967    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:49.213056    3758 certs.go:68] Setting up /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000 for IP: 192.169.0.6
	I0816 10:02:49.213061    3758 certs.go:194] generating shared ca certs ...
	I0816 10:02:49.213071    3758 certs.go:226] acquiring lock for ca certs: {Name:mka549fbc467849834d2267da204028c49d88a3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:49.213229    3758 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key
	I0816 10:02:49.213305    3758 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key
	I0816 10:02:49.213314    3758 certs.go:256] generating profile certs ...
	I0816 10:02:49.213406    3758 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key
	I0816 10:02:49.213429    3758 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.5eb44438
	I0816 10:02:49.213443    3758 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.5eb44438 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I0816 10:02:49.263058    3758 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.5eb44438 ...
	I0816 10:02:49.263077    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.5eb44438: {Name:mk266d5e842df442c104f85113e06d171d5a89ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:49.263395    3758 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.5eb44438 ...
	I0816 10:02:49.263404    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.5eb44438: {Name:mk1428912a4c1edaafe55f9bff302c19ad337785 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:49.263623    3758 certs.go:381] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.5eb44438 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt
	I0816 10:02:49.263846    3758 certs.go:385] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.5eb44438 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key
	I0816 10:02:49.264093    3758 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key
	I0816 10:02:49.264103    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 10:02:49.264125    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 10:02:49.264143    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 10:02:49.264163    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 10:02:49.264180    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 10:02:49.264199    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 10:02:49.264223    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 10:02:49.264242    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 10:02:49.264331    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem (1338 bytes)
	W0816 10:02:49.264378    3758 certs.go:480] ignoring /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831_empty.pem, impossibly tiny 0 bytes
	I0816 10:02:49.264386    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 10:02:49.264418    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem (1078 bytes)
	I0816 10:02:49.264449    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem (1123 bytes)
	I0816 10:02:49.264477    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem (1679 bytes)
	I0816 10:02:49.264545    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:02:49.264593    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /usr/share/ca-certificates/18312.pem
	I0816 10:02:49.264614    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:49.264634    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem -> /usr/share/ca-certificates/1831.pem
	I0816 10:02:49.264663    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:49.264814    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:49.264897    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:49.265014    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:49.265108    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:49.296192    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0816 10:02:49.299577    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0816 10:02:49.312765    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0816 10:02:49.316551    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0816 10:02:49.332167    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0816 10:02:49.336199    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0816 10:02:49.349166    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0816 10:02:49.354242    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0816 10:02:49.364119    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0816 10:02:49.367349    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0816 10:02:49.375423    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0816 10:02:49.378717    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0816 10:02:49.386918    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 10:02:49.408036    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 10:02:49.428163    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 10:02:49.448952    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 10:02:49.468986    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0816 10:02:49.489674    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 10:02:49.509597    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 10:02:49.530175    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 10:02:49.550578    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /usr/share/ca-certificates/18312.pem (1708 bytes)
	I0816 10:02:49.571266    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 10:02:49.590995    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem --> /usr/share/ca-certificates/1831.pem (1338 bytes)
	I0816 10:02:49.611766    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0816 10:02:49.625222    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0816 10:02:49.638852    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0816 10:02:49.653497    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0816 10:02:49.667098    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0816 10:02:49.680935    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0816 10:02:49.695437    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0816 10:02:49.708684    3758 ssh_runner.go:195] Run: openssl version
	I0816 10:02:49.712979    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18312.pem && ln -fs /usr/share/ca-certificates/18312.pem /etc/ssl/certs/18312.pem"
	I0816 10:02:49.721565    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18312.pem
	I0816 10:02:49.725513    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 16:57 /usr/share/ca-certificates/18312.pem
	I0816 10:02:49.725580    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18312.pem
	I0816 10:02:49.730364    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18312.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 10:02:49.739184    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 10:02:49.747494    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:49.750923    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:48 /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:49.750955    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:49.755195    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 10:02:49.763703    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1831.pem && ln -fs /usr/share/ca-certificates/1831.pem /etc/ssl/certs/1831.pem"
	I0816 10:02:49.772914    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1831.pem
	I0816 10:02:49.776377    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 16:57 /usr/share/ca-certificates/1831.pem
	I0816 10:02:49.776421    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1831.pem
	I0816 10:02:49.780731    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1831.pem /etc/ssl/certs/51391683.0"
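The `ln -fs ... /etc/ssl/certs/<hash>.0` steps above follow OpenSSL's CA lookup convention: a cert directory is searched by the subject-hash filename that `openssl x509 -hash` prints (3ec20f2e, b5213941, 51391683 in this run). A minimal Go sketch of creating such a symlink by shelling out to openssl, same flags as the log (paths are illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func linkByHash(pemPath, certDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certDir, hash+".0")
    	os.Remove(link) // replace any stale link, mirroring `ln -fs`
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := linkByHash("/usr/share/ca-certificates/18312.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }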
	I0816 10:02:49.789078    3758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 10:02:49.792242    3758 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 10:02:49.792277    3758 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.0 docker true true} ...
	I0816 10:02:49.792332    3758 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-286000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 10:02:49.792349    3758 kube-vip.go:115] generating kube-vip config ...
	I0816 10:02:49.792381    3758 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 10:02:49.804469    3758 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 10:02:49.804511    3758 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
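In the kube-vip manifest above, the VIP 192.169.0.254 is announced via ARP on eth0, and control-plane leader election uses the plndr-cp-lock lease with a 5s duration, 3s renew deadline, and 1s retry period, so exactly one control-plane node holds the VIP at a time. A minimal Go sketch of rendering such a manifest from a template, in the spirit of the kube-vip.go step; the template below is abbreviated to a few fields and is not minikube's actual one:

    package main

    import (
    	"os"
    	"text/template"
    )

    // abbreviated, illustrative template; not minikube's real kube-vip template
    const manifest = `apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      containers:
      - name: kube-vip
        image: ghcr.io/kube-vip/kube-vip:v0.8.0
        args: ["manager"]
        env:
        - {name: address, value: "{{.VIP}}"}
        - {name: port, value: "{{.Port}}"}
        - {name: vip_leaderelection, value: "true"}
    `

    func main() {
    	t := template.Must(template.New("kubevip").Parse(manifest))
    	t.Execute(os.Stdout, struct{ VIP, Port string }{VIP: "192.169.0.254", Port: "8443"})
    }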
	I0816 10:02:49.804567    3758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 10:02:49.812921    3758 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0816 10:02:49.812983    3758 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0816 10:02:49.821031    3758 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0816 10:02:49.821035    3758 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0816 10:02:49.821039    3758 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubectl
	I0816 10:02:51.911279    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0816 10:02:51.911374    3758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0816 10:02:51.915115    3758 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0816 10:02:51.915150    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0816 10:02:52.537289    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0816 10:02:52.537383    3758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0816 10:02:52.540898    3758 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0816 10:02:52.540929    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0816 10:02:53.267467    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:02:53.278589    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0816 10:02:53.278731    3758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0816 10:02:53.282055    3758 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0816 10:02:53.282073    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
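The binary transfer above downloads kubeadm, kubectl, and kubelet from dl.k8s.io, verifying each against its published .sha256 before caching and scp'ing it into /var/lib/minikube/binaries. A minimal Go sketch of such a checksum-verified download, using the kubelet URL from the log (error handling trimmed; the full file is held in memory for brevity):

    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"net/http"
    	"strings"
    )

    func fetch(url string) ([]byte, error) {
    	resp, err := http.Get(url)
    	if err != nil {
    		return nil, err
    	}
    	defer resp.Body.Close()
    	return io.ReadAll(resp.Body)
    }

    func main() {
    	url := "https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet"
    	bin, err := fetch(url)
    	if err != nil {
    		panic(err)
    	}
    	sum, err := fetch(url + ".sha256")
    	if err != nil {
    		panic(err)
    	}
    	got := sha256.Sum256(bin)
    	want := strings.Fields(string(sum))[0] // the .sha256 file holds the hex digest
    	fmt.Println("checksum ok:", hex.EncodeToString(got[:]) == want)
    }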
	I0816 10:02:53.498714    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0816 10:02:53.506112    3758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0816 10:02:53.519672    3758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 10:02:53.533551    3758 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0816 10:02:53.548024    3758 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0816 10:02:53.551124    3758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:02:53.560470    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:53.659270    3758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:02:53.674625    3758 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:02:53.674900    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:53.674923    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:53.683891    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51093
	I0816 10:02:53.684256    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:53.684641    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:53.684658    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:53.684885    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:53.685003    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:53.685084    3758 start.go:317] joinCluster: &{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:02:53.685155    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0816 10:02:53.685167    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:53.685248    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:53.685339    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:53.685423    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:53.685503    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:53.765911    3758 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:02:53.765972    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mpmdnf.6rzc0wpb01e0t6lz --discovery-token-ca-cert-hash sha256:995590413ea712c4f7db2cf849d28264ebc291e6cd53d741ce1d22c1daaa1602 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-286000-m02 --control-plane --apiserver-advertise-address=192.169.0.6 --apiserver-bind-port=8443"
	I0816 10:03:22.070391    3758 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mpmdnf.6rzc0wpb01e0t6lz --discovery-token-ca-cert-hash sha256:995590413ea712c4f7db2cf849d28264ebc291e6cd53d741ce1d22c1daaa1602 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-286000-m02 --control-plane --apiserver-advertise-address=192.169.0.6 --apiserver-bind-port=8443": (28.304928658s)
	I0816 10:03:22.070414    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0816 10:03:22.427732    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-286000-m02 minikube.k8s.io/updated_at=2024_08_16T10_03_22_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd minikube.k8s.io/name=ha-286000 minikube.k8s.io/primary=false
	I0816 10:03:22.531211    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-286000-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0816 10:03:22.629050    3758 start.go:319] duration metric: took 28.944509117s to joinCluster
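
Two details in the kubectl calls above are easy to miss: label --overwrite makes the updated_at/version labels idempotent across re-runs, and the trailing "-" on the taint name removes the control-plane NoSchedule taint so the node can also run ordinary workloads (every ha-286000 node is both ControlPlane:true and Worker:true). A sketch of the taint-removal call alone, with the node name taken from the log:

package main

import (
	"fmt"
	"os/exec"
)

// Removing a taint uses the taint name with a trailing "-", the same
// convention as "kubectl label <key>-" for labels.
func main() {
	cmd := exec.Command("kubectl", "taint", "nodes", "ha-286000-m02",
		"node-role.kubernetes.io/control-plane:NoSchedule-")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("taint removal failed: %v\n%s", err, out)
	}
}
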
	I0816 10:03:22.629105    3758 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:03:22.629360    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:03:22.652598    3758 out.go:177] * Verifying Kubernetes components...
	I0816 10:03:22.726472    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:03:22.952731    3758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:03:22.975401    3758 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:03:22.975621    3758 kapi.go:59] client config for ha-286000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key", CAFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xa202f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0816 10:03:22.975663    3758 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0816 10:03:22.975836    3758 node_ready.go:35] waiting up to 6m0s for node "ha-286000-m02" to be "Ready" ...
	I0816 10:03:22.975886    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:22.975891    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:22.975897    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:22.975901    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:22.983180    3758 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0816 10:03:23.476314    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:23.476365    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:23.476380    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:23.476394    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:23.502874    3758 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0816 10:03:23.977290    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:23.977304    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:23.977311    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:23.977315    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:23.979495    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:24.477835    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:24.477851    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:24.477858    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:24.477861    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:24.480701    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:24.977514    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:24.977568    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:24.977580    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:24.977588    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:24.980856    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:24.981299    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:25.476235    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:25.476252    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:25.476260    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:25.476277    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:25.478567    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:25.976418    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:25.976469    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:25.976477    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:25.976480    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:25.978730    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:26.476423    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:26.476439    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:26.476445    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:26.476448    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:26.478424    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:26.975919    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:26.975934    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:26.975940    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:26.975945    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:26.977809    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:27.475966    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:27.475981    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:27.475987    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:27.475990    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:27.477769    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:27.478121    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:27.977123    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:27.977139    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:27.977146    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:27.977150    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:27.979266    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:28.477140    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:28.477226    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:28.477254    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:28.477264    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:28.480386    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:28.977368    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:28.977407    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:28.977420    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:28.977427    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:28.980479    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:29.477521    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:29.477545    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:29.477556    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:29.477565    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:29.481179    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:29.481510    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:29.975906    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:29.975932    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:29.975943    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:29.975949    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:29.978633    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:30.477171    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:30.477189    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:30.477198    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:30.477201    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:30.480005    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:30.977947    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:30.977973    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:30.977993    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:30.977998    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:30.982219    3758 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 10:03:31.476252    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:31.476327    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:31.476341    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:31.476347    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:31.479417    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:31.975987    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:31.976012    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:31.976023    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:31.976029    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:31.979409    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:31.979880    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:32.477180    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:32.477207    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:32.477229    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:32.477240    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:32.480686    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:32.976225    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:32.976250    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:32.976261    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:32.976269    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:32.979533    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:33.475956    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:33.475980    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:33.475991    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:33.475997    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:33.479025    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:33.977111    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:33.977185    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:33.977198    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:33.977204    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:33.980426    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:33.980922    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:34.476455    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:34.476470    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:34.476477    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:34.476481    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:34.478517    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:34.976996    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:34.977037    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:34.977049    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:34.977084    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:34.980500    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:35.476045    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:35.476068    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:35.476079    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:35.476086    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:35.479058    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:35.975920    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:35.975944    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:35.975956    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:35.975962    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:35.979071    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:36.477845    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:36.477872    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:36.477885    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:36.477946    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:36.481401    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:36.481890    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:36.977033    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:36.977057    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:36.977069    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:36.977076    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:36.980476    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:37.477095    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:37.477120    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:37.477133    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:37.477141    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:37.480485    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:37.976303    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:37.976320    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:37.976343    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:37.976348    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:37.978551    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:38.476492    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:38.476517    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.476528    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.476536    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.480190    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:38.976091    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:38.976114    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.976124    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.976129    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.979741    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:38.980051    3758 node_ready.go:49] node "ha-286000-m02" has status "Ready":"True"
	I0816 10:03:38.980066    3758 node_ready.go:38] duration metric: took 16.004516347s for node "ha-286000-m02" to be "Ready" ...
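
The block of near-identical GETs above is the Ready wait itself: the node object is re-fetched roughly every 500ms until its NodeReady condition reports True, which here takes about 16 seconds. A minimal client-go sketch of the same loop; the kubeconfig path and helper name are illustrative, not minikube's actual code:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node object until its Ready condition is True
// or the timeout elapses, mirroring the GET loop in the log above.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %s not Ready within %v", name, timeout)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	fmt.Println(waitNodeReady(cs, "ha-286000-m02", 6*time.Minute))
}
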
	I0816 10:03:38.980077    3758 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 10:03:38.980127    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:03:38.980135    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.980142    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.980152    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.982937    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:38.987546    3758 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-2kqjf" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.987591    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-2kqjf
	I0816 10:03:38.987597    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.987603    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.987607    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.989339    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.989748    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:38.989758    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.989764    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.989768    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.991324    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.991660    3758 pod_ready.go:93] pod "coredns-6f6b679f8f-2kqjf" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:38.991671    3758 pod_ready.go:82] duration metric: took 4.111381ms for pod "coredns-6f6b679f8f-2kqjf" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.991678    3758 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-rfbz7" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.991708    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-rfbz7
	I0816 10:03:38.991713    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.991718    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.991721    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.993245    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.993624    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:38.993631    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.993640    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.993644    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.995095    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.995419    3758 pod_ready.go:93] pod "coredns-6f6b679f8f-rfbz7" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:38.995427    3758 pod_ready.go:82] duration metric: took 3.744646ms for pod "coredns-6f6b679f8f-rfbz7" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.995433    3758 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.995467    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-286000
	I0816 10:03:38.995472    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.995478    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.995482    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.997007    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.997390    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:38.997397    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.997403    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.997406    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.998839    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.999151    3758 pod_ready.go:93] pod "etcd-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:38.999160    3758 pod_ready.go:82] duration metric: took 3.721879ms for pod "etcd-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.999165    3758 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.999202    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-286000-m02
	I0816 10:03:38.999207    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.999213    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.999217    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.001189    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:39.001547    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:39.001554    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.001559    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.001563    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.003004    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:39.003290    3758 pod_ready.go:93] pod "etcd-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:39.003298    3758 pod_ready.go:82] duration metric: took 4.127582ms for pod "etcd-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.003307    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.177834    3758 request.go:632] Waited for 174.443582ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000
	I0816 10:03:39.177948    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000
	I0816 10:03:39.177963    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.177974    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.177981    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.181371    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:39.376438    3758 request.go:632] Waited for 194.538822ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:39.376562    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:39.376574    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.376586    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.376596    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.379989    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:39.380425    3758 pod_ready.go:93] pod "kube-apiserver-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:39.380437    3758 pod_ready.go:82] duration metric: took 377.131323ms for pod "kube-apiserver-ha-286000" in "kube-system" namespace to be "Ready" ...
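
The "Waited for ... due to client-side throttling" lines that start appearing here are not the apiserver pushing back: they come from client-go's local token-bucket rate limiter, which defaults to 5 requests/second with a burst of 10 and begins delaying requests once the burst is spent, as these rapid per-pod checks do. The limiter is configured on rest.Config; a sketch with illustrative values:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Defaults are QPS=5 and Burst=10; once more than 10 requests arrive
	// in quick succession, client-go delays the rest and logs the
	// "client-side throttling" message seen above. The values below are
	// illustrative, not what minikube ships with.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	config.QPS = 50
	config.Burst = 100
	_ = kubernetes.NewForConfigOrDie(config)
}
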
	I0816 10:03:39.380446    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.576633    3758 request.go:632] Waited for 196.095353ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000-m02
	I0816 10:03:39.576687    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000-m02
	I0816 10:03:39.576696    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.576769    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.576793    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.580164    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:39.776642    3758 request.go:632] Waited for 195.66937ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:39.776725    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:39.776735    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.776746    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.776752    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.780273    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:39.780714    3758 pod_ready.go:93] pod "kube-apiserver-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:39.780723    3758 pod_ready.go:82] duration metric: took 400.274102ms for pod "kube-apiserver-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.780730    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.977598    3758 request.go:632] Waited for 196.714211ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000
	I0816 10:03:39.977650    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000
	I0816 10:03:39.977659    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.977670    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.977679    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.981079    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.177460    3758 request.go:632] Waited for 195.94045ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:40.177528    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:40.177589    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:40.177603    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:40.177609    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:40.180971    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.181992    3758 pod_ready.go:93] pod "kube-controller-manager-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:40.182036    3758 pod_ready.go:82] duration metric: took 401.277819ms for pod "kube-controller-manager-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:40.182047    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:40.376258    3758 request.go:632] Waited for 194.111468ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000-m02
	I0816 10:03:40.376327    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000-m02
	I0816 10:03:40.376336    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:40.376347    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:40.376355    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:40.379476    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.576506    3758 request.go:632] Waited for 196.409077ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:40.576552    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:40.576560    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:40.576571    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:40.576580    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:40.579964    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.580641    3758 pod_ready.go:93] pod "kube-controller-manager-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:40.580654    3758 pod_ready.go:82] duration metric: took 398.607786ms for pod "kube-controller-manager-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:40.580663    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pt669" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:40.778194    3758 request.go:632] Waited for 197.469682ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pt669
	I0816 10:03:40.778324    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pt669
	I0816 10:03:40.778333    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:40.778345    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:40.778352    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:40.781925    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.976623    3758 request.go:632] Waited for 194.04689ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:40.976752    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:40.976761    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:40.976772    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:40.976780    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:40.980022    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.980473    3758 pod_ready.go:93] pod "kube-proxy-pt669" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:40.980485    3758 pod_ready.go:82] duration metric: took 399.822578ms for pod "kube-proxy-pt669" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:40.980494    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-w4nt2" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:41.177361    3758 request.go:632] Waited for 196.811255ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w4nt2
	I0816 10:03:41.177533    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w4nt2
	I0816 10:03:41.177545    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:41.177556    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:41.177563    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:41.181543    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:41.377692    3758 request.go:632] Waited for 195.583238ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:41.377791    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:41.377802    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:41.377812    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:41.377819    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:41.381134    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:41.381716    3758 pod_ready.go:93] pod "kube-proxy-w4nt2" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:41.381726    3758 pod_ready.go:82] duration metric: took 401.234529ms for pod "kube-proxy-w4nt2" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:41.381733    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:41.577020    3758 request.go:632] Waited for 195.246357ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000
	I0816 10:03:41.577073    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000
	I0816 10:03:41.577125    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:41.577138    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:41.577147    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:41.580051    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:41.777879    3758 request.go:632] Waited for 197.366145ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:41.777974    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:41.777983    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:41.777995    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:41.778003    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:41.781434    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:41.782012    3758 pod_ready.go:93] pod "kube-scheduler-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:41.782023    3758 pod_ready.go:82] duration metric: took 400.292624ms for pod "kube-scheduler-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:41.782051    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:41.976356    3758 request.go:632] Waited for 194.23341ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000-m02
	I0816 10:03:41.976463    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000-m02
	I0816 10:03:41.976473    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:41.976485    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:41.976494    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:41.980088    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:42.176952    3758 request.go:632] Waited for 196.161737ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:42.177009    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:42.177018    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.177026    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.177083    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.182888    3758 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 10:03:42.183179    3758 pod_ready.go:93] pod "kube-scheduler-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:42.183188    3758 pod_ready.go:82] duration metric: took 401.133714ms for pod "kube-scheduler-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:42.183203    3758 pod_ready.go:39] duration metric: took 3.203173842s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 10:03:42.183221    3758 api_server.go:52] waiting for apiserver process to appear ...
	I0816 10:03:42.183274    3758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:03:42.195671    3758 api_server.go:72] duration metric: took 19.566910589s to wait for apiserver process to appear ...
	I0816 10:03:42.195682    3758 api_server.go:88] waiting for apiserver healthz status ...
	I0816 10:03:42.195698    3758 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0816 10:03:42.198837    3758 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0816 10:03:42.198870    3758 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0816 10:03:42.198874    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.198880    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.198884    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.199389    3758 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0816 10:03:42.199468    3758 api_server.go:141] control plane version: v1.31.0
	I0816 10:03:42.199478    3758 api_server.go:131] duration metric: took 3.791893ms to wait for apiserver health ...
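
The health gate above is two plain HTTP checks: GET /healthz must return 200 with the body "ok", then GET /version supplies the control-plane version. A minimal probe of the same endpoint, using the address from the log; InsecureSkipVerify is for brevity only, since the real check presents the cluster CA and client certificates:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

// Probe the apiserver healthz endpoint; a healthy apiserver answers
// 200 with the literal body "ok", as in the log above.
func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
	}}
	resp, err := client.Get("https://192.169.0.5:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body)
}
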
	I0816 10:03:42.199486    3758 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 10:03:42.378048    3758 request.go:632] Waited for 178.500938ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:03:42.378156    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:03:42.378166    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.378178    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.378186    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.382776    3758 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 10:03:42.386068    3758 system_pods.go:59] 17 kube-system pods found
	I0816 10:03:42.386083    3758 system_pods.go:61] "coredns-6f6b679f8f-2kqjf" [be18cd09-3e3b-4749-bf29-7001a879f593] Running
	I0816 10:03:42.386087    3758 system_pods.go:61] "coredns-6f6b679f8f-rfbz7" [a593e27a-e38b-46c7-a603-44963c31c095] Running
	I0816 10:03:42.386090    3758 system_pods.go:61] "etcd-ha-286000" [c45bc623-befe-4e88-8da1-bb4f7d7be5b2] Running
	I0816 10:03:42.386093    3758 system_pods.go:61] "etcd-ha-286000-m02" [e14fbe0c-4527-4cbb-8c13-01c0b32a8d15] Running
	I0816 10:03:42.386096    3758 system_pods.go:61] "kindnet-t9kjf" [20c57f28-5e86-44a4-8a2a-07e1e240abf2] Running
	I0816 10:03:42.386099    3758 system_pods.go:61] "kindnet-whqxb" [1cc5291b-52ea-44b5-b87c-607d46b5281a] Running
	I0816 10:03:42.386102    3758 system_pods.go:61] "kube-apiserver-ha-286000" [c0d298a9-931b-4084-9e5f-01a88de27456] Running
	I0816 10:03:42.386104    3758 system_pods.go:61] "kube-apiserver-ha-286000-m02" [3666c131-2b88-483a-ab8c-b18cb8db7d6f] Running
	I0816 10:03:42.386120    3758 system_pods.go:61] "kube-controller-manager-ha-286000" [5a4f4c5d-d89a-4e5f-b022-479ca31f64cd] Running
	I0816 10:03:42.386127    3758 system_pods.go:61] "kube-controller-manager-ha-286000-m02" [a347c3d4-fa47-49ea-93a0-83087017a2bd] Running
	I0816 10:03:42.386130    3758 system_pods.go:61] "kube-proxy-pt669" [798c956f-8f08-465a-b0d9-90b65f703f80] Running
	I0816 10:03:42.386139    3758 system_pods.go:61] "kube-proxy-w4nt2" [79ce2248-f8fd-4a3b-aeb7-f81d7ad64564] Running
	I0816 10:03:42.386143    3758 system_pods.go:61] "kube-scheduler-ha-286000" [b4af80e0-cbb0-4257-a212-3585faff0875] Running
	I0816 10:03:42.386145    3758 system_pods.go:61] "kube-scheduler-ha-286000-m02" [89920e93-c959-47ec-8a2c-f2b63e8c3d3a] Running
	I0816 10:03:42.386153    3758 system_pods.go:61] "kube-vip-ha-286000" [b0f85be2-2afb-44b0-a701-161b24529349] Running
	I0816 10:03:42.386156    3758 system_pods.go:61] "kube-vip-ha-286000-m02" [4982dc53-946d-4f75-8e9e-452ef39ff2fa] Running
	I0816 10:03:42.386159    3758 system_pods.go:61] "storage-provisioner" [4805d53b-2db3-4092-a3f2-d4a854e93adc] Running
	I0816 10:03:42.386163    3758 system_pods.go:74] duration metric: took 186.675963ms to wait for pod list to return data ...
	I0816 10:03:42.386170    3758 default_sa.go:34] waiting for default service account to be created ...
	I0816 10:03:42.577030    3758 request.go:632] Waited for 190.795237ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0816 10:03:42.577183    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0816 10:03:42.577198    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.577209    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.577215    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.581020    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:42.581204    3758 default_sa.go:45] found service account: "default"
	I0816 10:03:42.581216    3758 default_sa.go:55] duration metric: took 195.045213ms for default service account to be created ...
	I0816 10:03:42.581224    3758 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 10:03:42.777160    3758 request.go:632] Waited for 195.848894ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:03:42.777252    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:03:42.777268    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.777285    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.777303    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.781637    3758 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 10:03:42.784962    3758 system_pods.go:86] 17 kube-system pods found
	I0816 10:03:42.784974    3758 system_pods.go:89] "coredns-6f6b679f8f-2kqjf" [be18cd09-3e3b-4749-bf29-7001a879f593] Running
	I0816 10:03:42.784978    3758 system_pods.go:89] "coredns-6f6b679f8f-rfbz7" [a593e27a-e38b-46c7-a603-44963c31c095] Running
	I0816 10:03:42.784981    3758 system_pods.go:89] "etcd-ha-286000" [c45bc623-befe-4e88-8da1-bb4f7d7be5b2] Running
	I0816 10:03:42.784984    3758 system_pods.go:89] "etcd-ha-286000-m02" [e14fbe0c-4527-4cbb-8c13-01c0b32a8d15] Running
	I0816 10:03:42.784987    3758 system_pods.go:89] "kindnet-t9kjf" [20c57f28-5e86-44a4-8a2a-07e1e240abf2] Running
	I0816 10:03:42.784990    3758 system_pods.go:89] "kindnet-whqxb" [1cc5291b-52ea-44b5-b87c-607d46b5281a] Running
	I0816 10:03:42.784992    3758 system_pods.go:89] "kube-apiserver-ha-286000" [c0d298a9-931b-4084-9e5f-01a88de27456] Running
	I0816 10:03:42.784995    3758 system_pods.go:89] "kube-apiserver-ha-286000-m02" [3666c131-2b88-483a-ab8c-b18cb8db7d6f] Running
	I0816 10:03:42.784997    3758 system_pods.go:89] "kube-controller-manager-ha-286000" [5a4f4c5d-d89a-4e5f-b022-479ca31f64cd] Running
	I0816 10:03:42.785001    3758 system_pods.go:89] "kube-controller-manager-ha-286000-m02" [a347c3d4-fa47-49ea-93a0-83087017a2bd] Running
	I0816 10:03:42.785003    3758 system_pods.go:89] "kube-proxy-pt669" [798c956f-8f08-465a-b0d9-90b65f703f80] Running
	I0816 10:03:42.785006    3758 system_pods.go:89] "kube-proxy-w4nt2" [79ce2248-f8fd-4a3b-aeb7-f81d7ad64564] Running
	I0816 10:03:42.785009    3758 system_pods.go:89] "kube-scheduler-ha-286000" [b4af80e0-cbb0-4257-a212-3585faff0875] Running
	I0816 10:03:42.785011    3758 system_pods.go:89] "kube-scheduler-ha-286000-m02" [89920e93-c959-47ec-8a2c-f2b63e8c3d3a] Running
	I0816 10:03:42.785015    3758 system_pods.go:89] "kube-vip-ha-286000" [b0f85be2-2afb-44b0-a701-161b24529349] Running
	I0816 10:03:42.785017    3758 system_pods.go:89] "kube-vip-ha-286000-m02" [4982dc53-946d-4f75-8e9e-452ef39ff2fa] Running
	I0816 10:03:42.785023    3758 system_pods.go:89] "storage-provisioner" [4805d53b-2db3-4092-a3f2-d4a854e93adc] Running
	I0816 10:03:42.785028    3758 system_pods.go:126] duration metric: took 203.80313ms to wait for k8s-apps to be running ...
	I0816 10:03:42.785034    3758 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 10:03:42.785088    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:03:42.796381    3758 system_svc.go:56] duration metric: took 11.343439ms WaitForService to wait for kubelet
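
The kubelet check above relies entirely on systemctl's exit status: is-active --quiet prints nothing and exits 0 only when the unit is active, so the command's error result is the whole health signal. A sketch of the same test:

package main

import (
	"fmt"
	"os/exec"
)

// systemctl is-active --quiet exits 0 iff the unit is active, so a nil
// error from Run() is the success signal; no output parsing is needed.
func main() {
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}
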
	I0816 10:03:42.796397    3758 kubeadm.go:582] duration metric: took 20.16764951s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:03:42.796409    3758 node_conditions.go:102] verifying NodePressure condition ...
	I0816 10:03:42.977594    3758 request.go:632] Waited for 181.143005ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0816 10:03:42.977695    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0816 10:03:42.977706    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.977719    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.977724    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.981373    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:42.981988    3758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 10:03:42.982013    3758 node_conditions.go:123] node cpu capacity is 2
	I0816 10:03:42.982039    3758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 10:03:42.982043    3758 node_conditions.go:123] node cpu capacity is 2
	I0816 10:03:42.982046    3758 node_conditions.go:105] duration metric: took 185.637747ms to run NodePressure ...
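
The NodePressure pass reads each node's capacity out of its status; here both nodes report 17734596Ki of ephemeral storage and 2 CPUs. A sketch reading the same fields with client-go, with an illustrative kubeconfig path:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity is a ResourceList keyed by resource name; the log
		// prints the ephemeral-storage and cpu entries for each node.
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}
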
	I0816 10:03:42.982055    3758 start.go:241] waiting for startup goroutines ...
	I0816 10:03:42.982079    3758 start.go:255] writing updated cluster config ...
	I0816 10:03:43.003740    3758 out.go:201] 
	I0816 10:03:43.024885    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:03:43.024976    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:03:43.047776    3758 out.go:177] * Starting "ha-286000-m03" control-plane node in "ha-286000" cluster
	I0816 10:03:43.137621    3758 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:03:43.137655    3758 cache.go:56] Caching tarball of preloaded images
	I0816 10:03:43.137861    3758 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 10:03:43.137884    3758 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:03:43.138022    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:03:43.159475    3758 start.go:360] acquireMachinesLock for ha-286000-m03: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:03:43.159592    3758 start.go:364] duration metric: took 91.442µs to acquireMachinesLock for "ha-286000-m03"
	I0816 10:03:43.159625    3758 start.go:93] Provisioning new machine with config: &{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:03:43.159736    3758 start.go:125] createHost starting for "m03" (driver="hyperkit")
	I0816 10:03:43.180569    3758 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 10:03:43.180719    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:03:43.180762    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:03:43.191087    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51098
	I0816 10:03:43.191558    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:03:43.191889    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:03:43.191899    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:03:43.192106    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:03:43.192302    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:03:43.192394    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:43.192506    3758 start.go:159] libmachine.API.Create for "ha-286000" (driver="hyperkit")
	I0816 10:03:43.192526    3758 client.go:168] LocalClient.Create starting
	I0816 10:03:43.192559    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem
	I0816 10:03:43.192606    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:03:43.192620    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:03:43.192675    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem
	I0816 10:03:43.192704    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:03:43.192714    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:03:43.192726    3758 main.go:141] libmachine: Running pre-create checks...
	I0816 10:03:43.192731    3758 main.go:141] libmachine: (ha-286000-m03) Calling .PreCreateCheck
	I0816 10:03:43.192813    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:43.192833    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetConfigRaw
	I0816 10:03:43.202038    3758 main.go:141] libmachine: Creating machine...
	I0816 10:03:43.202062    3758 main.go:141] libmachine: (ha-286000-m03) Calling .Create
	I0816 10:03:43.202302    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:43.202574    3758 main.go:141] libmachine: (ha-286000-m03) DBG | I0816 10:03:43.202284    3848 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:03:43.202705    3758 main.go:141] libmachine: (ha-286000-m03) Downloading /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1276/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 10:03:43.529965    3758 main.go:141] libmachine: (ha-286000-m03) DBG | I0816 10:03:43.529902    3848 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa...
	I0816 10:03:43.631057    3758 main.go:141] libmachine: (ha-286000-m03) DBG | I0816 10:03:43.630965    3848 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/ha-286000-m03.rawdisk...
	I0816 10:03:43.631074    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Writing magic tar header
	I0816 10:03:43.631083    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Writing SSH key tar header
	I0816 10:03:43.631711    3758 main.go:141] libmachine: (ha-286000-m03) DBG | I0816 10:03:43.631681    3848 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03 ...
	I0816 10:03:44.155270    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:44.155286    3758 main.go:141] libmachine: (ha-286000-m03) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/hyperkit.pid
	I0816 10:03:44.155318    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Using UUID 408efb18-ff91-428f-b414-6eb0d982a779
	I0816 10:03:44.181981    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Generated MAC 8a:e:de:5b:b5:8b
	I0816 10:03:44.182010    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000
	I0816 10:03:44.182059    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"408efb18-ff91-428f-b414-6eb0d982a779", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:03:44.182101    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"408efb18-ff91-428f-b414-6eb0d982a779", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:03:44.182228    3758 main.go:141] (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "408efb18-ff91-428f-b414-6eb0d982a779", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/ha-286000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"}
	I0816 10:03:44.182306    3758 main.go:141] (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 408efb18-ff91-428f-b414-6eb0d982a779 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/ha-286000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"
	I0816 10:03:44.182332    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 10:03:44.185479    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: Pid is 3849
	I0816 10:03:44.185900    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 0
	I0816 10:03:44.185912    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:44.186025    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:44.186994    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:44.187062    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:03:44.187079    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:03:44.187114    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:03:44.187142    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:03:44.187168    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:03:44.187185    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:03:44.193352    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 10:03:44.201781    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 10:03:44.202595    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:03:44.202610    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:03:44.202637    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:03:44.202653    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:03:44.588441    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 10:03:44.588458    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 10:03:44.703100    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:03:44.703117    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:03:44.703125    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:03:44.703135    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:03:44.703952    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 10:03:44.703963    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 10:03:46.188345    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 1
	I0816 10:03:46.188360    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:46.188480    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:46.189255    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:46.189306    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:03:46.189321    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:03:46.189335    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:03:46.189352    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:03:46.189359    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:03:46.189385    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:03:48.190823    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 2
	I0816 10:03:48.190838    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:48.190916    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:48.191692    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:48.191747    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:03:48.191757    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:03:48.191779    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:03:48.191787    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:03:48.191803    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:03:48.191815    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:03:50.191897    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 3
	I0816 10:03:50.191913    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:50.191977    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:50.192744    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:50.192793    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:03:50.192802    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:03:50.192811    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:03:50.192820    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:03:50.192836    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:03:50.192850    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:03:50.310705    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:50 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 10:03:50.310770    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:50 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 10:03:50.310783    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:50 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 10:03:50.334054    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:50 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 10:03:52.193443    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 4
	I0816 10:03:52.193459    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:52.193535    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:52.194319    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:52.194367    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:03:52.194379    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:03:52.194390    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:03:52.194399    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:03:52.194408    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:03:52.194419    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:03:54.195434    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 5
	I0816 10:03:54.195456    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:54.195540    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:54.196406    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:54.196510    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 6 entries in /var/db/dhcpd_leases!
	I0816 10:03:54.196530    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66c0d7f9}
	I0816 10:03:54.196551    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found match: 8a:e:de:5b:b5:8b
	I0816 10:03:54.196553    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetConfigRaw
	I0816 10:03:54.196565    3758 main.go:141] libmachine: (ha-286000-m03) DBG | IP: 192.169.0.7
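The attempts above poll /var/db/dhcpd_leases until the VM's generated MAC shows up with a lease. A rough Go sketch of that lookup, assuming the usual lease-file layout where an ip_address line precedes the matching hw_address line inside each { } block (findIPByMAC is illustrative, not minikube's function):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findIPByMAC walks the lease file block by block, remembering the last
// ip_address seen and returning it when an hw_address line ends with mac.
func findIPByMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac):
			return ip, nil
		case line == "}":
			ip = "" // block ended without a match; reset
		}
	}
	return "", fmt.Errorf("MAC %s not found in %s", mac, path)
}

func main() {
	ip, err := findIPByMAC("/var/db/dhcpd_leases", "8a:e:de:5b:b5:8b")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println(ip) // expected: 192.169.0.7, per the lease entry above
}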
	I0816 10:03:54.197236    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:54.197351    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:54.197441    3758 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 10:03:54.197454    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetState
	I0816 10:03:54.197541    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:54.197611    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:54.198439    3758 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 10:03:54.198446    3758 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 10:03:54.198450    3758 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 10:03:54.198454    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:54.198532    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:54.198612    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:54.198695    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:54.198783    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:54.198892    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:54.199063    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:54.199071    3758 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 10:03:55.266165    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
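WaitForSSH simply runs `exit 0` over SSH and treats a clean exit as "SSH available". A self-contained probe in that spirit (requires golang.org/x/crypto/ssh; host, user, and key path are taken from the log, and the host-key check is skipped since these are throwaway test VMs):

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// waitForSSH dials addr with key-based auth and runs `exit 0`; a nil
// error means the guest's SSH daemon is up and usable.
func waitForSSH(addr, user, keyPath string, timeout time.Duration) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VMs only
		Timeout:         timeout,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.Run("exit 0")
}

func main() {
	err := waitForSSH("192.169.0.7:22", "docker",
		"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa",
		10*time.Second)
	fmt.Println("ssh reachable:", err == nil)
}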
	I0816 10:03:55.266179    3758 main.go:141] libmachine: Detecting the provisioner...
	I0816 10:03:55.266185    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.266318    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.266413    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.266507    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.266601    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.266731    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.266879    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.266886    3758 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 10:03:55.332778    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 10:03:55.332822    3758 main.go:141] libmachine: found compatible host: buildroot
	I0816 10:03:55.332828    3758 main.go:141] libmachine: Provisioning with buildroot...
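Provisioner detection boils down to `cat /etc/os-release` and matching the ID field, as the buildroot match above shows. A small sketch of that parse (parseOSRelease is illustrative):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns KEY=value lines into a map, stripping quotes and
// skipping blanks and comments.
func parseOSRelease(data string) map[string]string {
	kv := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(data))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		if k, v, ok := strings.Cut(line, "="); ok {
			kv[k] = strings.Trim(v, `"`)
		}
	}
	return kv
}

func main() {
	// The exact output captured in the log above.
	osRelease := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	info := parseOSRelease(osRelease)
	fmt.Println("compatible host:", info["ID"] == "buildroot") // true
}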
	I0816 10:03:55.332834    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:03:55.332971    3758 buildroot.go:166] provisioning hostname "ha-286000-m03"
	I0816 10:03:55.332982    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:03:55.333079    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.333186    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.333277    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.333375    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.333463    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.333593    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.333737    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.333746    3758 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-286000-m03 && echo "ha-286000-m03" | sudo tee /etc/hostname
	I0816 10:03:55.411701    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-286000-m03
	
	I0816 10:03:55.411715    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.411851    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.411965    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.412057    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.412140    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.412274    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.412418    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.412429    3758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-286000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-286000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-286000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 10:03:55.485102    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 10:03:55.485118    3758 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19461-1276/.minikube CaCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19461-1276/.minikube}
	I0816 10:03:55.485127    3758 buildroot.go:174] setting up certificates
	I0816 10:03:55.485134    3758 provision.go:84] configureAuth start
	I0816 10:03:55.485141    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:03:55.485284    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:03:55.485375    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.485445    3758 provision.go:143] copyHostCerts
	I0816 10:03:55.485474    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:03:55.485527    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem, removing ...
	I0816 10:03:55.485533    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:03:55.485654    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem (1123 bytes)
	I0816 10:03:55.485864    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:03:55.485895    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem, removing ...
	I0816 10:03:55.485900    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:03:55.486003    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem (1679 bytes)
	I0816 10:03:55.486172    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:03:55.486206    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem, removing ...
	I0816 10:03:55.486215    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:03:55.486286    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem (1078 bytes)
	I0816 10:03:55.486435    3758 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem org=jenkins.ha-286000-m03 san=[127.0.0.1 192.169.0.7 ha-286000-m03 localhost minikube]
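The server cert generated here is signed by the minikube CA and carries the SANs listed in the log (127.0.0.1, 192.169.0.7, ha-286000-m03, localhost, minikube). A rough crypto/x509 sketch of such a cert; unlike minikube it generates a throwaway CA instead of loading ca.pem/ca-key.pem, and error handling is elided for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA (sketch only; errors ignored).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srv := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-286000-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-286000-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.7")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
	fmt.Println("server cert DER bytes:", len(srvDER))
}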
	I0816 10:03:55.572803    3758 provision.go:177] copyRemoteCerts
	I0816 10:03:55.572855    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 10:03:55.572869    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.573015    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.573114    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.573208    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.573304    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:03:55.612104    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 10:03:55.612186    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 10:03:55.632484    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 10:03:55.632548    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 10:03:55.652677    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 10:03:55.652752    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 10:03:55.672555    3758 provision.go:87] duration metric: took 187.4165ms to configureAuth
	I0816 10:03:55.672568    3758 buildroot.go:189] setting minikube options for container-runtime
	I0816 10:03:55.672735    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:03:55.672748    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:55.672889    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.672982    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.673071    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.673157    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.673245    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.673371    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.673496    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.673504    3758 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 10:03:55.738053    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 10:03:55.738064    3758 buildroot.go:70] root file system type: tmpfs
	I0816 10:03:55.738156    3758 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 10:03:55.738167    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.738306    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.738397    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.738489    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.738573    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.738694    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.738841    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.738890    3758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 10:03:55.813774    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 10:03:55.813790    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.813924    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.814019    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.814103    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.814193    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.814320    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.814470    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.814484    3758 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 10:03:57.356529    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
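The diff/mv/systemctl one-liner above installs the new unit only when its contents differ from what is already on disk, so an unchanged unit never triggers a docker restart. The same update-only-if-changed idea in Go (replaceIfChanged is illustrative, not minikube's code):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// replaceIfChanged reports true when path was overwritten with data;
// false means the file already matched and no reload is needed.
func replaceIfChanged(path string, data []byte) (bool, error) {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, data) {
		return false, nil // already up to date; skip daemon-reload/restart
	}
	if err != nil && !os.IsNotExist(err) {
		return false, err
	}
	tmp := path + ".new"
	if err := os.WriteFile(tmp, data, 0o644); err != nil {
		return false, err
	}
	return true, os.Rename(tmp, path) // atomic swap on the same filesystem
}

func main() {
	changed, err := replaceIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
	fmt.Println("replaced:", changed, "err:", err)
}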
	I0816 10:03:57.356544    3758 main.go:141] libmachine: Checking connection to Docker...
	I0816 10:03:57.356549    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetURL
	I0816 10:03:57.356692    3758 main.go:141] libmachine: Docker is up and running!
	I0816 10:03:57.356700    3758 main.go:141] libmachine: Reticulating splines...
	I0816 10:03:57.356705    3758 client.go:171] duration metric: took 14.164443579s to LocalClient.Create
	I0816 10:03:57.356717    3758 start.go:167] duration metric: took 14.164482312s to libmachine.API.Create "ha-286000"
	I0816 10:03:57.356727    3758 start.go:293] postStartSetup for "ha-286000-m03" (driver="hyperkit")
	I0816 10:03:57.356734    3758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 10:03:57.356744    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:57.356919    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 10:03:57.356935    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:57.357030    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:57.357130    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:57.357273    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:57.357390    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:03:57.398231    3758 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 10:03:57.402472    3758 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 10:03:57.402483    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/addons for local assets ...
	I0816 10:03:57.402571    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/files for local assets ...
	I0816 10:03:57.402723    3758 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> 18312.pem in /etc/ssl/certs
	I0816 10:03:57.402729    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /etc/ssl/certs/18312.pem
	I0816 10:03:57.402895    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 10:03:57.412226    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:03:57.443242    3758 start.go:296] duration metric: took 86.507779ms for postStartSetup
	I0816 10:03:57.443268    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetConfigRaw
	I0816 10:03:57.443871    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:03:57.444031    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:03:57.444386    3758 start.go:128] duration metric: took 14.284913269s to createHost
	I0816 10:03:57.444401    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:57.444490    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:57.444568    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:57.444658    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:57.444732    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:57.444842    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:57.444963    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:57.444971    3758 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 10:03:57.509634    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723827837.586061636
	
	I0816 10:03:57.509648    3758 fix.go:216] guest clock: 1723827837.586061636
	I0816 10:03:57.509653    3758 fix.go:229] Guest: 2024-08-16 10:03:57.586061636 -0700 PDT Remote: 2024-08-16 10:03:57.444395 -0700 PDT m=+129.370490857 (delta=141.666636ms)
	I0816 10:03:57.509665    3758 fix.go:200] guest clock delta is within tolerance: 141.666636ms
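The guest-clock check runs `date +%s.%N` in the VM, parses the epoch timestamp, and compares it against the host clock; here the ~141.7ms delta is within tolerance. A compact sketch of that comparison (float parsing is only roughly microsecond-accurate at this magnitude, which is fine for a tolerance check):

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// clockDeltaOK parses the guest's `date +%s.%N` output and returns the
// signed delta against the host clock plus whether it is within tolerance.
func clockDeltaOK(guestOutput string, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	secs, err := strconv.ParseFloat(guestOutput, 64)
	if err != nil {
		return 0, false
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := guest.Sub(host)
	return delta, math.Abs(float64(delta)) <= float64(tolerance)
}

func main() {
	// Host timestamp and guest output taken from the log lines above.
	host := time.Date(2024, 8, 16, 10, 3, 57, 444395000, time.FixedZone("PDT", -7*3600))
	delta, ok := clockDeltaOK("1723827837.586061636", host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok) // ~141.7ms, true
}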
	I0816 10:03:57.509669    3758 start.go:83] releasing machines lock for "ha-286000-m03", held for 14.350339851s
	I0816 10:03:57.509691    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:57.509832    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:03:57.542277    3758 out.go:177] * Found network options:
	I0816 10:03:57.562266    3758 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0816 10:03:57.583040    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	W0816 10:03:57.583073    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:03:57.583092    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:57.583964    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:57.584310    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:57.584469    3758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 10:03:57.584522    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	W0816 10:03:57.584570    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	W0816 10:03:57.584596    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:03:57.584708    3758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0816 10:03:57.584735    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:57.584813    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:57.584995    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:57.585023    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:57.585164    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:57.585195    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:57.585338    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:03:57.585406    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:57.585566    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	W0816 10:03:57.623481    3758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 10:03:57.623541    3758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 10:03:57.670278    3758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 10:03:57.670298    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:03:57.670408    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:03:57.686134    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 10:03:57.694681    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 10:03:57.703134    3758 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 10:03:57.703198    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 10:03:57.711736    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:03:57.720041    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 10:03:57.728421    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:03:57.737252    3758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 10:03:57.746356    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 10:03:57.755117    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 10:03:57.763962    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 10:03:57.772639    3758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 10:03:57.780684    3758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 10:03:57.788938    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:03:57.893932    3758 ssh_runner.go:195] Run: sudo systemctl restart containerd
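The run of sed commands above rewrites /etc/containerd/config.toml for the cgroupfs driver; the SystemdCgroup edit, for instance, corresponds to a multiline regex replace like this Go equivalent (illustrative only):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	cfg := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n  SystemdCgroup = true\n"
	// Matches the whole SystemdCgroup line while keeping its indentation,
	// like: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(cfg, "${1}SystemdCgroup = false"))
}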
	I0816 10:03:57.913150    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:03:57.913224    3758 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 10:03:57.932274    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:03:57.945000    3758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 10:03:57.965222    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:03:57.977460    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:03:57.988259    3758 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 10:03:58.006552    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:03:58.017261    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:03:58.032545    3758 ssh_runner.go:195] Run: which cri-dockerd
	I0816 10:03:58.035464    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 10:03:58.042742    3758 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 10:03:58.056747    3758 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 10:03:58.163471    3758 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 10:03:58.281020    3758 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 10:03:58.281044    3758 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 10:03:58.294992    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:03:58.399122    3758 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:04:59.415916    3758 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.017935847s)
	I0816 10:04:59.415981    3758 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0816 10:04:59.453653    3758 out.go:201] 
	W0816 10:04:59.474040    3758 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 16 17:03:56 ha-286000-m03 systemd[1]: Starting Docker Application Container Engine...
	Aug 16 17:03:56 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:56.200976866Z" level=info msg="Starting up"
	Aug 16 17:03:56 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:56.201455258Z" level=info msg="containerd not running, starting managed containerd"
	Aug 16 17:03:56 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:56.201908035Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=519
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.218394236Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233514110Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233584256Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233654513Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233690141Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233768300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233806284Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233955562Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233996222Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.234026670Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.234054929Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.234137090Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.234382642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.235934793Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.235985918Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.236117022Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.236159609Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.236256962Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.236327154Z" level=info msg="metadata content store policy set" policy=shared
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238422470Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238509436Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238555940Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238594343Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238633954Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238734892Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238944917Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239083528Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239123172Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239153894Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239184820Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239214617Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239248728Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239285992Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239333696Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239368624Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239411950Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239451554Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239490333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239523121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239554681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239589296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239621390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239653930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239683266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239712579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239742056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239772828Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239801901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239830636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239859570Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239893189Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239929555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239960582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239991787Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240076099Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240121809Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240153088Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240182438Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240210918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240240067Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240268586Z" level=info msg="NRI interface is disabled by configuration."
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240520719Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240609661Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240710485Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240795617Z" level=info msg="containerd successfully booted in 0.023088s"
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.221812848Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.228854621Z" level=info msg="Loading containers: start."
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.312023793Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.390459780Z" level=info msg="Loading containers: done."
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.404746652Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.404864935Z" level=info msg="Daemon has completed initialization"
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.430646618Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.430810646Z" level=info msg="API listen on [::]:2376"
	Aug 16 17:03:57 ha-286000-m03 systemd[1]: Started Docker Application Container Engine.
	Aug 16 17:03:58 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:58.488220464Z" level=info msg="Processing signal 'terminated'"
	Aug 16 17:03:58 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:58.489144852Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 16 17:03:58 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:58.489473398Z" level=info msg="Daemon shutdown complete"
	Aug 16 17:03:58 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:58.489515121Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 16 17:03:58 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:58.489527809Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 16 17:03:58 ha-286000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Aug 16 17:03:59 ha-286000-m03 systemd[1]: docker.service: Deactivated successfully.
	Aug 16 17:03:59 ha-286000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Aug 16 17:03:59 ha-286000-m03 systemd[1]: Starting Docker Application Container Engine...
	Aug 16 17:03:59 ha-286000-m03 dockerd[913]: time="2024-08-16T17:03:59.520198872Z" level=info msg="Starting up"
	Aug 16 17:04:59 ha-286000-m03 dockerd[913]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 16 17:04:59 ha-286000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 16 17:04:59 ha-286000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 16 17:04:59 ha-286000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
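Note: the journalctl excerpt above is the root cause of the RUNTIME_ENABLE failure. The first dockerd (pid 513) came up cleanly by launching its own managed containerd on /var/run/docker/containerd/containerd.sock, but after systemd restarted the unit, the second dockerd (pid 913) waited on /run/containerd/containerd.sock and gave up after 60 seconds ("context deadline exceeded"). A minimal Go sketch of that dial, handy for checking from inside the VM whether the system containerd socket is accepting connections (the socket path comes from the log above; the 10-second timeout is an illustrative assumption, not dockerd's actual deadline):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Socket path taken from the dockerd error in the journalctl output.
		const sock = "/run/containerd/containerd.sock"
		// dockerd gives up when its dial context expires; a fixed 10s
		// timeout is used here purely for illustration.
		conn, err := net.DialTimeout("unix", sock, 10*time.Second)
		if err != nil {
			fmt.Printf("containerd socket not reachable: %v\n", err)
			return
		}
		conn.Close()
		fmt.Println("containerd socket is accepting connections")
	}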
	W0816 10:04:59.474111    3758 out.go:270] * 
	W0816 10:04:59.474938    3758 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:04:59.558192    3758 out.go:201] 
	
	
	==> Docker <==
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.631803753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.636334140Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.636542015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.636768594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.637206626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:02:50 ha-286000 cri-dockerd[1134]: time="2024-08-16T17:02:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fbd84fb813c9034ce56be933a9dc0c8539c5c831abbd163996da762065f0c208/resolv.conf as [nameserver 192.169.0.1]"
	Aug 16 17:02:50 ha-286000 cri-dockerd[1134]: time="2024-08-16T17:02:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/452942e267927b3a15327ece33bffe6fb305db22e6a72ff9b65d4acfe89f3891/resolv.conf as [nameserver 192.169.0.1]"
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.842515621Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.842901741Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.843146100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.843415719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.885181891Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.885227629Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.885240492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.885438543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:05:03 ha-286000 dockerd[1241]: time="2024-08-16T17:05:03.188642191Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:05:03 ha-286000 dockerd[1241]: time="2024-08-16T17:05:03.188710762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:05:03 ha-286000 dockerd[1241]: time="2024-08-16T17:05:03.188723920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:05:03 ha-286000 dockerd[1241]: time="2024-08-16T17:05:03.188799320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:05:03 ha-286000 cri-dockerd[1134]: time="2024-08-16T17:05:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1873ade92edb9d51940849fdee8cb6db41b03368956580ec6099a918aff580e1/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 16 17:05:04 ha-286000 cri-dockerd[1134]: time="2024-08-16T17:05:04Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Aug 16 17:05:04 ha-286000 dockerd[1241]: time="2024-08-16T17:05:04.522783748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:05:04 ha-286000 dockerd[1241]: time="2024-08-16T17:05:04.522878436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:05:04 ha-286000 dockerd[1241]: time="2024-08-16T17:05:04.522904596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:05:04 ha-286000 dockerd[1241]: time="2024-08-16T17:05:04.523003751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
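Note: the Docker log above is from the primary node ha-286000, where dockerd and cri-dockerd are healthy (loading the runc v2 shim plugins, rewriting container resolv.conf files, and pulling gcr.io/k8s-minikube/busybox:1.28); only docker.service on ha-286000-m03 failed in the earlier excerpt.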
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5bbe7a8cc492e       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   12 minutes ago      Running             busybox                   0                   1873ade92edb9       busybox-7dff88458-dvmvk
	bcd7170b050a5       cbb01a7bd410d                                                                                         15 minutes ago      Running             coredns                   0                   452942e267927       coredns-6f6b679f8f-rfbz7
	60d3d03e297ce       cbb01a7bd410d                                                                                         15 minutes ago      Running             coredns                   0                   fbd84fb813c90       coredns-6f6b679f8f-2kqjf
	f55b59f53c6eb       6e38f40d628db                                                                                         15 minutes ago      Running             storage-provisioner       0                   482990a4b00e6       storage-provisioner
	2677394dcd66f       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              15 minutes ago      Running             kindnet-cni               0                   afaf657e85c32       kindnet-whqxb
	81f6c96d46494       ad83b2ca7b09e                                                                                         15 minutes ago      Running             kube-proxy                0                   dfb822fc27b5e       kube-proxy-w4nt2
	cafa34c562392       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     15 minutes ago      Running             kube-vip                  0                   69fba128b04a6       kube-vip-ha-286000
	8f5867ee99d9b       045733566833c                                                                                         15 minutes ago      Running             kube-controller-manager   0                   e87fd3e77384f       kube-controller-manager-ha-286000
	f7b2e9efdd94f       1766f54c897f0                                                                                         15 minutes ago      Running             kube-scheduler            0                   8e2b186980aff       kube-scheduler-ha-286000
	d8dadff6cec78       2e96e5913fc06                                                                                         15 minutes ago      Running             etcd                      0                   cdb14ff7d8896       etcd-ha-286000
	5f3adc202a7a2       604f5db92eaa8                                                                                         15 minutes ago      Running             kube-apiserver            0                   818ee6dafe6c9       kube-apiserver-ha-286000
	
	
	==> coredns [60d3d03e297c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:37081 - 62351 "HINFO IN 7437422972060865489.3041931607585121070. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009759141s
	[INFO] 10.244.0.4:34542 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.00094119s
	[INFO] 10.244.0.4:38912 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.000921826s
	[INFO] 10.244.1.2:39585 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000070042s
	[INFO] 10.244.0.4:39673 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000101817s
	[INFO] 10.244.0.4:55820 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000225412s
	[INFO] 10.244.1.2:48427 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135793s
	[INFO] 10.244.1.2:33204 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000791531s
	[INFO] 10.244.1.2:51238 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081236s
	[INFO] 10.244.1.2:42705 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000135743s
	[INFO] 10.244.1.2:33254 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106295s
	[INFO] 10.244.0.4:53900 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067988s
	[INFO] 10.244.1.2:48994 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00006771s
	[INFO] 10.244.1.2:56734 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000138883s
	[INFO] 10.244.0.4:53039 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000056881s
	[INFO] 10.244.0.4:47474 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000052458s
	[INFO] 10.244.1.2:35027 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113257s
	[INFO] 10.244.1.2:60680 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000087182s
	[INFO] 10.244.1.2:36287 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000080566s
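Note: the NXDOMAIN/NOERROR pairs above are ordinary resolv.conf search-path expansion, per the options ndots:5 configuration cri-dockerd wrote earlier (nameserver 10.96.0.10, search default.svc.cluster.local svc.cluster.local cluster.local, options ndots:5). A worked example from the log: a lookup of kubernetes.default is first tried as kubernetes.default.default.svc.cluster.local (NXDOMAIN) before kubernetes.default.svc.cluster.local resolves (NOERROR), so these entries are expected behavior, not errors.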
	
	
	==> coredns [bcd7170b050a] <==
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:47813 - 35687 "HINFO IN 8431179596625010309.1478595720938230641. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009077429s
	[INFO] 10.244.0.4:47786 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132847s
	[INFO] 10.244.0.4:40096 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.001354873s
	[INFO] 10.244.1.2:40884 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096214s
	[INFO] 10.244.1.2:55655 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000050276s
	[INFO] 10.244.1.2:47690 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000614458s
	[INFO] 10.244.0.4:45344 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000076791s
	[INFO] 10.244.0.4:53101 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.001042655s
	[INFO] 10.244.0.4:43889 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000082483s
	[INFO] 10.244.0.4:59210 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.00074308s
	[INFO] 10.244.0.4:38429 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00010233s
	[INFO] 10.244.0.4:48679 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00007214s
	[INFO] 10.244.1.2:43879 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000062278s
	[INFO] 10.244.1.2:45902 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000100209s
	[INFO] 10.244.1.2:39740 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071593s
	[INFO] 10.244.0.4:50878 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000077864s
	[INFO] 10.244.0.4:51260 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078903s
	[INFO] 10.244.0.4:38206 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000049117s
	[INFO] 10.244.1.2:54952 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000071797s
	[INFO] 10.244.1.2:36478 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063757s
	[INFO] 10.244.0.4:43240 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000066616s
	[INFO] 10.244.0.4:60894 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000052873s
	[INFO] 10.244.1.2:43932 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00008816s
	
	
	==> describe nodes <==
	Name:               ha-286000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-286000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=ha-286000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T10_02_26_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:02:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-286000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:17:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 17:15:42 +0000   Fri, 16 Aug 2024 17:02:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 17:15:42 +0000   Fri, 16 Aug 2024 17:02:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 17:15:42 +0000   Fri, 16 Aug 2024 17:02:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 17:15:42 +0000   Fri, 16 Aug 2024 17:02:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-286000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 a56c711da9894c3c86f2a7af9fec2b53
	  System UUID:                ad96408c-0000-0000-89eb-d74e5b68d297
	  Boot ID:                    23e4d4eb-a603-45bf-aca4-fb0893407f5a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-dvmvk              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-6f6b679f8f-2kqjf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-6f6b679f8f-rfbz7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-ha-286000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-whqxb                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-286000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-286000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-w4nt2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-286000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-286000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node ha-286000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node ha-286000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node ha-286000 status is now: NodeHasSufficientPID
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node ha-286000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node ha-286000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node ha-286000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                node-controller  Node ha-286000 event: Registered Node ha-286000 in Controller
	  Normal  NodeReady                15m                kubelet          Node ha-286000 status is now: NodeReady
	  Normal  RegisteredNode           14m                node-controller  Node ha-286000 event: Registered Node ha-286000 in Controller
	
	
	Name:               ha-286000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-286000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=ha-286000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T10_03_22_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:03:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-286000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:17:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 17:15:35 +0000   Fri, 16 Aug 2024 17:03:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 17:15:35 +0000   Fri, 16 Aug 2024 17:03:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 17:15:35 +0000   Fri, 16 Aug 2024 17:03:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 17:15:35 +0000   Fri, 16 Aug 2024 17:03:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-286000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 42bf4a1b451f44ad925f50a6a94e4cff
	  System UUID:                f7634935-0000-0000-a751-ac72999da031
	  Boot ID:                    e82d142e-37b9-4938-81bc-f5bc1a2db23f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-k9m92                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-286000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-t9kjf                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-286000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-286000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-pt669                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-286000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-286000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node ha-286000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node ha-286000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node ha-286000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14m                node-controller  Node ha-286000-m02 event: Registered Node ha-286000-m02 in Controller
	  Normal  RegisteredNode           14m                node-controller  Node ha-286000-m02 event: Registered Node ha-286000-m02 in Controller
	
	
	Name:               ha-286000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-286000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=ha-286000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T10_17_22_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:17:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-286000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:17:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 17:17:43 +0000   Fri, 16 Aug 2024 17:17:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 17:17:43 +0000   Fri, 16 Aug 2024 17:17:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 17:17:43 +0000   Fri, 16 Aug 2024 17:17:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 17:17:43 +0000   Fri, 16 Aug 2024 17:17:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-286000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 324c7ca05f77443abc4e861a3d5a5224
	  System UUID:                9a6645c6-0000-0000-8cbd-49b6a6a0383b
	  Boot ID:                    839ab079-775d-4939-ac8e-9fb255ba29df
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-99xmp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kindnet-b9r6s              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      29s
	  kube-system                 kube-proxy-5qhgk           0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22s                kube-proxy       
	  Normal  NodeHasSufficientMemory  30s (x2 over 30s)  kubelet          Node ha-286000-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s (x2 over 30s)  kubelet          Node ha-286000-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s (x2 over 30s)  kubelet          Node ha-286000-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  30s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           27s                node-controller  Node ha-286000-m04 event: Registered Node ha-286000-m04 in Controller
	  Normal  RegisteredNode           26s                node-controller  Node ha-286000-m04 event: Registered Node ha-286000-m04 in Controller
	  Normal  NodeReady                7s                 kubelet          Node ha-286000-m04 status is now: NodeReady
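Note: only ha-286000, ha-286000-m02, and ha-286000-m04 appear in the node list; ha-286000-m03, whose docker.service failed to start in the journalctl excerpt above, never registered with the cluster. (The same view can be pulled from a live cluster with kubectl get nodes -o wide.)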
	
	
	==> dmesg <==
	[  +0.006640] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.760553] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +1.351722] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000003] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug16 17:02] systemd-fstab-generator[519]: Ignoring "noauto" option for root device
	[  +0.114039] systemd-fstab-generator[531]: Ignoring "noauto" option for root device
	[  +1.207693] kauditd_printk_skb: 42 callbacks suppressed
	[  +0.611653] systemd-fstab-generator[786]: Ignoring "noauto" option for root device
	[  +0.265175] systemd-fstab-generator[846]: Ignoring "noauto" option for root device
	[  +0.101006] systemd-fstab-generator[858]: Ignoring "noauto" option for root device
	[  +0.123308] systemd-fstab-generator[872]: Ignoring "noauto" option for root device
	[  +2.497195] systemd-fstab-generator[1086]: Ignoring "noauto" option for root device
	[  +0.110299] systemd-fstab-generator[1098]: Ignoring "noauto" option for root device
	[  +0.095670] systemd-fstab-generator[1110]: Ignoring "noauto" option for root device
	[  +0.122299] systemd-fstab-generator[1126]: Ignoring "noauto" option for root device
	[  +3.636333] systemd-fstab-generator[1227]: Ignoring "noauto" option for root device
	[  +0.056190] kauditd_printk_skb: 233 callbacks suppressed
	[  +2.505796] systemd-fstab-generator[1479]: Ignoring "noauto" option for root device
	[  +3.634449] systemd-fstab-generator[1611]: Ignoring "noauto" option for root device
	[  +0.055756] kauditd_printk_skb: 70 callbacks suppressed
	[  +6.910973] systemd-fstab-generator[2107]: Ignoring "noauto" option for root device
	[  +0.079351] kauditd_printk_skb: 72 callbacks suppressed
	[  +9.264773] kauditd_printk_skb: 51 callbacks suppressed
	[Aug16 17:03] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [d8dadff6cec7] <==
	{"level":"info","ts":"2024-08-16T17:03:20.602718Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34","remote-peer-urls":["https://192.169.0.6:2380"]}
	{"level":"info","ts":"2024-08-16T17:03:20.606154Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:20.606173Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:20.606385Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:20.607792Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"b8c6c7563d17d844","raft-conf-change":"ConfChangeAddLearnerNode","raft-conf-change-node-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:21.553074Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:21.553169Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:21.561367Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"9633c02797b6d34","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-16T17:03:21.561449Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:21.561528Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:21.571776Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"9633c02797b6d34","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-16T17:03:21.571882Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:22.121813Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 switched to configuration voters=(676450350361439540 13314548521573537860)"}
	{"level":"info","ts":"2024-08-16T17:03:22.121979Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844"}
	{"level":"info","ts":"2024-08-16T17:03:22.122011Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"b8c6c7563d17d844","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"9633c02797b6d34"}
	{"level":"warn","ts":"2024-08-16T17:05:03.077229Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.58419ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/default/busybox-7dff88458\" ","response":"range_response_count:1 size:1985"}
	{"level":"info","ts":"2024-08-16T17:05:03.077410Z","caller":"traceutil/trace.go:171","msg":"trace[51649115] range","detail":"{range_begin:/registry/replicasets/default/busybox-7dff88458; range_end:; response_count:1; response_revision:920; }","duration":"112.799262ms","start":"2024-08-16T17:05:02.964599Z","end":"2024-08-16T17:05:03.077398Z","steps":["trace[51649115] 'agreement among raft nodes before linearized reading'  (duration: 72.831989ms)","trace[51649115] 'range keys from in-memory index tree'  (duration: 39.723026ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-16T17:05:03.078503Z","caller":"traceutil/trace.go:171","msg":"trace[1276232025] transaction","detail":"{read_only:false; response_revision:921; number_of_response:1; }","duration":"116.265354ms","start":"2024-08-16T17:05:02.962223Z","end":"2024-08-16T17:05:03.078489Z","steps":["trace[1276232025] 'process raft request'  (duration: 75.236605ms)","trace[1276232025] 'compare'  (duration: 39.643768ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-16T17:12:20.455094Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1242}
	{"level":"info","ts":"2024-08-16T17:12:20.478439Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1242,"took":"22.921055ms","hash":729855672,"current-db-size-bytes":3137536,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":1589248,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-08-16T17:12:20.478900Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":729855672,"revision":1242,"compact-revision":-1}
	{"level":"info","ts":"2024-08-16T17:17:20.461022Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1872}
	{"level":"info","ts":"2024-08-16T17:17:20.477338Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1872,"took":"15.709672ms","hash":3229853735,"current-db-size-bytes":3137536,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":1396736,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2024-08-16T17:17:20.478748Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3229853735,"revision":1872,"compact-revision":1242}
	{"level":"info","ts":"2024-08-16T17:17:50.421877Z","caller":"traceutil/trace.go:171","msg":"trace[2006150280] transaction","detail":"{read_only:false; response_revision:2663; number_of_response:1; }","duration":"106.416528ms","start":"2024-08-16T17:17:50.315444Z","end":"2024-08-16T17:17:50.421861Z","steps":["trace[2006150280] 'process raft request'  (duration: 106.325161ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:17:51 up 16 min,  0 users,  load average: 0.69, 0.32, 0.19
	Linux ha-286000 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2677394dcd66] <==
	I0816 17:17:05.222810       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:17:05.222824       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:17:15.231227       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:17:15.231402       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:17:15.231537       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:17:15.231677       1 main.go:299] handling current node
	I0816 17:17:25.222718       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 17:17:25.223104       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	I0816 17:17:25.223665       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.169.0.8 Flags: [] Table: 0} 
	I0816 17:17:25.224064       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:17:25.224221       1 main.go:299] handling current node
	I0816 17:17:25.224408       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:17:25.224648       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:17:35.223909       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:17:35.224305       1 main.go:299] handling current node
	I0816 17:17:35.224566       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:17:35.224845       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:17:35.225222       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 17:17:35.225388       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	I0816 17:17:45.223708       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 17:17:45.223959       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	I0816 17:17:45.224161       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:17:45.224326       1 main.go:299] handling current node
	I0816 17:17:45.224456       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:17:45.224531       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
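Note: kindnet here is programming pod-network routes: for each remote node it adds a route to that node's pod CIDR via the node's InternalIP (for example, 10.244.2.0/24 via 192.169.0.8 when ha-286000-m04 joins) and skips the local node ("handling current node"). On the VM, the resulting routing table could be inspected with ip route (an illustrative command, not part of the test output).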
	
	
	==> kube-apiserver [5f3adc202a7a] <==
	I0816 17:02:23.102546       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0816 17:02:23.105482       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0816 17:02:23.105513       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0816 17:02:23.438732       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0816 17:02:23.465745       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0816 17:02:23.506778       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0816 17:02:23.510486       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5]
	I0816 17:02:23.511071       1 controller.go:615] quota admission added evaluator for: endpoints
	I0816 17:02:23.513769       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0816 17:02:24.114339       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0816 17:02:25.748425       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0816 17:02:25.762462       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0816 17:02:25.770706       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0816 17:02:29.466806       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0816 17:02:29.715365       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0816 17:16:53.377658       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51358: use of closed network connection
	E0816 17:16:53.575639       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51360: use of closed network connection
	E0816 17:16:53.910337       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51365: use of closed network connection
	E0816 17:16:54.106751       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51367: use of closed network connection
	E0816 17:16:54.415124       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51372: use of closed network connection
	E0816 17:16:54.604288       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51374: use of closed network connection
	E0816 17:16:57.856178       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51405: use of closed network connection
	E0816 17:16:58.061897       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51407: use of closed network connection
	E0816 17:16:58.252131       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51409: use of closed network connection
	E0816 17:16:58.440772       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51411: use of closed network connection
	
	
	==> kube-controller-manager [8f5867ee99d9] <==
	I0816 17:10:35.744951       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000"
	I0816 17:15:35.199030       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m02"
	I0816 17:15:42.092912       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000"
	I0816 17:17:21.952833       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-286000-m04\" does not exist"
	I0816 17:17:21.970197       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-286000-m04" podCIDRs=["10.244.2.0/24"]
	I0816 17:17:21.970254       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:17:21.970272       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:17:21.970425       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:17:22.057762       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="38.461µs"
	I0816 17:17:22.062866       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:17:22.339566       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:17:23.372821       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:17:24.020984       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:17:24.021820       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-286000-m04"
	I0816 17:17:24.091142       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:17:32.146156       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:17:44.596616       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:17:44.599585       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-286000-m04"
	I0816 17:17:44.604245       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:17:44.612723       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="57.66µs"
	I0816 17:17:44.622735       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="33.854µs"
	I0816 17:17:44.628514       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="41.171µs"
	I0816 17:17:46.710170       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="4.809826ms"
	I0816 17:17:46.710765       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="32.942µs"
	I0816 17:17:48.294811       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	
	
	==> kube-proxy [81f6c96d4649] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 17:02:30.214569       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 17:02:30.222978       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0816 17:02:30.223011       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 17:02:30.267797       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 17:02:30.267825       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 17:02:30.267843       1 server_linux.go:169] "Using iptables Proxier"
	I0816 17:02:30.270154       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 17:02:30.270311       1 server.go:483] "Version info" version="v1.31.0"
	I0816 17:02:30.270319       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 17:02:30.271352       1 config.go:197] "Starting service config controller"
	I0816 17:02:30.271358       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 17:02:30.271372       1 config.go:104] "Starting endpoint slice config controller"
	I0816 17:02:30.271375       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 17:02:30.271598       1 config.go:326] "Starting node config controller"
	I0816 17:02:30.271603       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 17:02:30.371824       1 shared_informer.go:320] Caches are synced for node config
	I0816 17:02:30.371859       1 shared_informer.go:320] Caches are synced for service config
	I0816 17:02:30.371867       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [f7b2e9efdd94] <==
	W0816 17:02:22.176847       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 17:02:22.176852       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:22.176967       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 17:02:22.176999       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:22.177013       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 17:02:22.179079       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:22.179253       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 17:02:22.179322       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0816 17:02:23.072963       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 17:02:23.073256       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:23.081000       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 17:02:23.081176       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:23.142220       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 17:02:23.142263       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:23.166962       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 17:02:23.167003       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:23.255186       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 17:02:23.255230       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:23.456273       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 17:02:23.456447       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0816 17:02:26.460413       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0816 17:05:02.827326       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-k9m92\": pod busybox-7dff88458-k9m92 is already assigned to node \"ha-286000-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-k9m92" node="ha-286000-m02"
	E0816 17:05:02.827387       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a516859c-0da8-4ba9-a896-ac720495818f(default/busybox-7dff88458-k9m92) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-k9m92"
	E0816 17:05:02.827403       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-k9m92\": pod busybox-7dff88458-k9m92 is already assigned to node \"ha-286000-m02\"" pod="default/busybox-7dff88458-k9m92"
	I0816 17:05:02.827520       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-k9m92" node="ha-286000-m02"
	
	
	==> kubelet <==
	Aug 16 17:13:25 ha-286000 kubelet[2114]: E0816 17:13:25.672677    2114 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 17:13:25 ha-286000 kubelet[2114]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 17:13:25 ha-286000 kubelet[2114]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 17:13:25 ha-286000 kubelet[2114]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:13:25 ha-286000 kubelet[2114]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 17:14:25 ha-286000 kubelet[2114]: E0816 17:14:25.670434    2114 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 17:14:25 ha-286000 kubelet[2114]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 17:14:25 ha-286000 kubelet[2114]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 17:14:25 ha-286000 kubelet[2114]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:14:25 ha-286000 kubelet[2114]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 17:15:25 ha-286000 kubelet[2114]: E0816 17:15:25.670848    2114 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 17:15:25 ha-286000 kubelet[2114]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 17:15:25 ha-286000 kubelet[2114]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 17:15:25 ha-286000 kubelet[2114]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:15:25 ha-286000 kubelet[2114]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 17:16:25 ha-286000 kubelet[2114]: E0816 17:16:25.672451    2114 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 17:16:25 ha-286000 kubelet[2114]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 17:16:25 ha-286000 kubelet[2114]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 17:16:25 ha-286000 kubelet[2114]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:16:25 ha-286000 kubelet[2114]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 17:17:25 ha-286000 kubelet[2114]: E0816 17:17:25.670075    2114 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 17:17:25 ha-286000 kubelet[2114]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 17:17:25 ha-286000 kubelet[2114]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 17:17:25 ha-286000 kubelet[2114]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:17:25 ha-286000 kubelet[2114]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
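
The repeating "Could not set up iptables canary" entries in the kubelet log above are kubelet's periodic probe for flushed firewall rules: it tries to create a throwaway KUBE-KUBELET-CANARY chain in the ip6tables nat table, and here that fails with exit status 3, apparently because this Buildroot guest kernel has no ip6tables nat support. A minimal Go sketch of an equivalent probe follows; it is illustrative only, not kubelet's actual code, and the exact ip6tables invocation is an assumption inferred from the log text.

	// Illustrative probe; not kubelet's actual implementation.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Try to create the throwaway canary chain in the ip6tables nat table;
		// the chain and table names are taken from the kubelet log above.
		out, err := exec.Command("ip6tables", "-w", "-t", "nat", "-N", "KUBE-KUBELET-CANARY").CombinedOutput()
		if err != nil {
			// On this guest the command exits with status 3 ("Table does not
			// exist"), which kubelet surfaces as the canary error above.
			fmt.Printf("could not set up iptables canary: %v\n%s", err, out)
			return
		}
		fmt.Println("ip6tables nat table is usable; canary chain created")
	}
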
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-286000 -n ha-286000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-286000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (51.15s)
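
For context, the post-mortem collection above is just a sequence of shelled-out commands. A minimal Go sketch of reproducing those "(dbg) Run:" steps outside the harness follows; it is illustrative only, not the actual helpers_test.go implementation, though the binary path, profile name, and arguments are copied from the log.

	// Illustrative only; not the actual helpers_test.go code.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// runAndLog executes a command and echoes its combined output, mirroring
	// the "(dbg) Run:" lines in this report.
	func runAndLog(name string, args ...string) {
		fmt.Printf("(dbg) Run: %s %v\n", name, args)
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Printf("(dbg) Non-zero exit: %v\n", err)
		}
	}

	func main() {
		// The same diagnostics the helpers collect after this failure.
		runAndLog("out/minikube-darwin-amd64", "status", "--format={{.APIServer}}", "-p", "ha-286000", "-n", "ha-286000")
		runAndLog("kubectl", "--context", "ha-286000", "get", "po",
			"-o=jsonpath={.items[*].metadata.name}", "-A",
			"--field-selector=status.phase!=Running")
	}
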

TestMultiControlPlane/serial/CopyFile (3.14s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 -p ha-286000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-286000 status --output json -v=7 --alsologtostderr: exit status 2 (441.39183ms)

-- stdout --
	[{"Name":"ha-286000","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"ha-286000-m02","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"ha-286000-m03","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false},{"Name":"ha-286000-m04","Host":"Running","Kubelet":"Running","APIServer":"Irrelevant","Kubeconfig":"Irrelevant","Worker":true}]

-- /stdout --
** stderr ** 
	I0816 10:17:52.635870    4300 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:17:52.636061    4300 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:17:52.636067    4300 out.go:358] Setting ErrFile to fd 2...
	I0816 10:17:52.636071    4300 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:17:52.636238    4300 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1276/.minikube/bin
	I0816 10:17:52.636427    4300 out.go:352] Setting JSON to true
	I0816 10:17:52.636451    4300 mustload.go:65] Loading cluster: ha-286000
	I0816 10:17:52.636493    4300 notify.go:220] Checking for updates...
	I0816 10:17:52.636761    4300 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:17:52.636794    4300 status.go:255] checking status of ha-286000 ...
	I0816 10:17:52.637190    4300 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:17:52.637241    4300 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:17:52.646468    4300 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51557
	I0816 10:17:52.646815    4300 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:17:52.647235    4300 main.go:141] libmachine: Using API Version  1
	I0816 10:17:52.647247    4300 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:17:52.647506    4300 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:17:52.647643    4300 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:17:52.647742    4300 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:17:52.647809    4300 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:17:52.648791    4300 status.go:330] ha-286000 host status = "Running" (err=<nil>)
	I0816 10:17:52.648811    4300 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:17:52.649075    4300 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:17:52.649113    4300 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:17:52.657539    4300 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51559
	I0816 10:17:52.657864    4300 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:17:52.658207    4300 main.go:141] libmachine: Using API Version  1
	I0816 10:17:52.658222    4300 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:17:52.658473    4300 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:17:52.658595    4300 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:17:52.658707    4300 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:17:52.658972    4300 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:17:52.659001    4300 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:17:52.672387    4300 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51561
	I0816 10:17:52.672743    4300 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:17:52.673070    4300 main.go:141] libmachine: Using API Version  1
	I0816 10:17:52.673082    4300 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:17:52.673314    4300 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:17:52.673422    4300 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:17:52.673548    4300 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:17:52.673567    4300 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:17:52.673643    4300 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:17:52.673724    4300 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:17:52.673805    4300 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:17:52.673882    4300 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:17:52.707374    4300 ssh_runner.go:195] Run: systemctl --version
	I0816 10:17:52.712093    4300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:17:52.723003    4300 kubeconfig.go:125] found "ha-286000" server: "https://192.169.0.254:8443"
	I0816 10:17:52.723031    4300 api_server.go:166] Checking apiserver status ...
	I0816 10:17:52.723067    4300 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:17:52.734709    4300 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1897/cgroup
	W0816 10:17:52.742067    4300 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1897/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:17:52.742119    4300 ssh_runner.go:195] Run: ls
	I0816 10:17:52.745528    4300 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0816 10:17:52.748675    4300 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0816 10:17:52.748687    4300 status.go:422] ha-286000 apiserver status = Running (err=<nil>)
	I0816 10:17:52.748697    4300 status.go:257] ha-286000 status: &{Name:ha-286000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:17:52.748708    4300 status.go:255] checking status of ha-286000-m02 ...
	I0816 10:17:52.748957    4300 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:17:52.748992    4300 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:17:52.757856    4300 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51565
	I0816 10:17:52.758220    4300 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:17:52.758609    4300 main.go:141] libmachine: Using API Version  1
	I0816 10:17:52.758624    4300 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:17:52.758853    4300 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:17:52.758967    4300 main.go:141] libmachine: (ha-286000-m02) Calling .GetState
	I0816 10:17:52.759056    4300 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:17:52.759140    4300 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:17:52.760147    4300 status.go:330] ha-286000-m02 host status = "Running" (err=<nil>)
	I0816 10:17:52.760158    4300 host.go:66] Checking if "ha-286000-m02" exists ...
	I0816 10:17:52.760470    4300 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:17:52.760496    4300 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:17:52.769470    4300 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51567
	I0816 10:17:52.769804    4300 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:17:52.770161    4300 main.go:141] libmachine: Using API Version  1
	I0816 10:17:52.770177    4300 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:17:52.770373    4300 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:17:52.770495    4300 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:17:52.770577    4300 host.go:66] Checking if "ha-286000-m02" exists ...
	I0816 10:17:52.770833    4300 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:17:52.770857    4300 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:17:52.779404    4300 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51569
	I0816 10:17:52.779753    4300 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:17:52.780087    4300 main.go:141] libmachine: Using API Version  1
	I0816 10:17:52.780097    4300 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:17:52.780312    4300 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:17:52.780432    4300 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:17:52.780581    4300 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:17:52.780593    4300 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:17:52.780673    4300 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:17:52.780759    4300 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:17:52.780835    4300 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:17:52.780920    4300 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:17:52.816086    4300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:17:52.828223    4300 kubeconfig.go:125] found "ha-286000" server: "https://192.169.0.254:8443"
	I0816 10:17:52.828238    4300 api_server.go:166] Checking apiserver status ...
	I0816 10:17:52.828282    4300 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:17:52.840348    4300 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1932/cgroup
	W0816 10:17:52.848963    4300 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1932/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:17:52.849012    4300 ssh_runner.go:195] Run: ls
	I0816 10:17:52.852179    4300 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0816 10:17:52.855313    4300 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0816 10:17:52.855324    4300 status.go:422] ha-286000-m02 apiserver status = Running (err=<nil>)
	I0816 10:17:52.855332    4300 status.go:257] ha-286000-m02 status: &{Name:ha-286000-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:17:52.855342    4300 status.go:255] checking status of ha-286000-m03 ...
	I0816 10:17:52.855608    4300 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:17:52.855638    4300 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:17:52.864385    4300 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51573
	I0816 10:17:52.864726    4300 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:17:52.865044    4300 main.go:141] libmachine: Using API Version  1
	I0816 10:17:52.865070    4300 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:17:52.865270    4300 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:17:52.865373    4300 main.go:141] libmachine: (ha-286000-m03) Calling .GetState
	I0816 10:17:52.865445    4300 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:17:52.865524    4300 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:17:52.866561    4300 status.go:330] ha-286000-m03 host status = "Running" (err=<nil>)
	I0816 10:17:52.866569    4300 host.go:66] Checking if "ha-286000-m03" exists ...
	I0816 10:17:52.866813    4300 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:17:52.866837    4300 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:17:52.875635    4300 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51575
	I0816 10:17:52.875969    4300 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:17:52.876298    4300 main.go:141] libmachine: Using API Version  1
	I0816 10:17:52.876315    4300 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:17:52.876552    4300 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:17:52.876675    4300 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:17:52.876751    4300 host.go:66] Checking if "ha-286000-m03" exists ...
	I0816 10:17:52.877033    4300 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:17:52.877065    4300 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:17:52.885653    4300 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51577
	I0816 10:17:52.885992    4300 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:17:52.886319    4300 main.go:141] libmachine: Using API Version  1
	I0816 10:17:52.886330    4300 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:17:52.886538    4300 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:17:52.886676    4300 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:17:52.886818    4300 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:17:52.886829    4300 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:17:52.886928    4300 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:17:52.887011    4300 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:17:52.887108    4300 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:17:52.887194    4300 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:17:52.923938    4300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:17:52.935571    4300 kubeconfig.go:125] found "ha-286000" server: "https://192.169.0.254:8443"
	I0816 10:17:52.935584    4300 api_server.go:166] Checking apiserver status ...
	I0816 10:17:52.935615    4300 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 10:17:52.946384    4300 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:17:52.946399    4300 status.go:422] ha-286000-m03 apiserver status = Stopped (err=<nil>)
	I0816 10:17:52.946410    4300 status.go:257] ha-286000-m03 status: &{Name:ha-286000-m03 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:17:52.946420    4300 status.go:255] checking status of ha-286000-m04 ...
	I0816 10:17:52.946685    4300 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:17:52.946714    4300 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:17:52.955387    4300 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51580
	I0816 10:17:52.955718    4300 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:17:52.956040    4300 main.go:141] libmachine: Using API Version  1
	I0816 10:17:52.956050    4300 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:17:52.956272    4300 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:17:52.956380    4300 main.go:141] libmachine: (ha-286000-m04) Calling .GetState
	I0816 10:17:52.956469    4300 main.go:141] libmachine: (ha-286000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:17:52.956540    4300 main.go:141] libmachine: (ha-286000-m04) DBG | hyperkit pid from json: 4248
	I0816 10:17:52.957568    4300 status.go:330] ha-286000-m04 host status = "Running" (err=<nil>)
	I0816 10:17:52.957577    4300 host.go:66] Checking if "ha-286000-m04" exists ...
	I0816 10:17:52.957834    4300 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:17:52.957857    4300 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:17:52.966581    4300 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51582
	I0816 10:17:52.966955    4300 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:17:52.967268    4300 main.go:141] libmachine: Using API Version  1
	I0816 10:17:52.967279    4300 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:17:52.967511    4300 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:17:52.967619    4300 main.go:141] libmachine: (ha-286000-m04) Calling .GetIP
	I0816 10:17:52.967730    4300 host.go:66] Checking if "ha-286000-m04" exists ...
	I0816 10:17:52.967987    4300 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:17:52.968010    4300 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:17:52.976587    4300 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51584
	I0816 10:17:52.976976    4300 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:17:52.977285    4300 main.go:141] libmachine: Using API Version  1
	I0816 10:17:52.977295    4300 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:17:52.977482    4300 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:17:52.977584    4300 main.go:141] libmachine: (ha-286000-m04) Calling .DriverName
	I0816 10:17:52.977719    4300 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:17:52.977736    4300 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHHostname
	I0816 10:17:52.977816    4300 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHPort
	I0816 10:17:52.977900    4300 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHKeyPath
	I0816 10:17:52.978007    4300 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHUsername
	I0816 10:17:52.978080    4300 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m04/id_rsa Username:docker}
	I0816 10:17:53.010862    4300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:17:53.021166    4300 status.go:257] ha-286000-m04 status: &{Name:ha-286000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:328: failed to run minikube status. args "out/minikube-darwin-amd64 -p ha-286000 status --output json -v=7 --alsologtostderr" : exit status 2
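
The exit status 2 is consistent with the JSON status above: ha-286000-m03 reports Kubelet and APIServer as "Stopped", so the status command signals a degraded cluster through its exit code. A minimal sketch of deriving that result from the printed JSON follows; the decision logic is an assumption for illustration, not minikube's actual implementation.

	// Assumed logic, not minikube's actual status implementation.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// nodeStatus mirrors the fields in the "-- stdout --" JSON above.
	type nodeStatus struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
	}

	func main() {
		// Abbreviated copy of the status JSON printed by the failing command.
		raw := `[{"Name":"ha-286000-m03","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}]`
		var nodes []nodeStatus
		if err := json.Unmarshal([]byte(raw), &nodes); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, n := range nodes {
			if n.Kubelet == "Stopped" || (!n.Worker && n.APIServer == "Stopped") {
				fmt.Printf("%s degraded: Kubelet=%s APIServer=%s\n", n.Name, n.Kubelet, n.APIServer)
				os.Exit(2) // matches the observed "exit status 2"
			}
		}
	}
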
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-286000 -n ha-286000
helpers_test.go:244: <<< TestMultiControlPlane/serial/CopyFile FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-286000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-286000 logs -n 25: (2.154162869s)
helpers_test.go:252: TestMultiControlPlane/serial/CopyFile logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:15 PDT | 16 Aug 24 10:15 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:15 PDT | 16 Aug 24 10:15 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:15 PDT | 16 Aug 24 10:15 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:15 PDT | 16 Aug 24 10:15 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:15 PDT | 16 Aug 24 10:15 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:15 PDT | 16 Aug 24 10:15 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT |                     |
	|         | busybox-7dff88458-99xmp --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT |                     |
	|         | busybox-7dff88458-99xmp --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT |                     |
	|         | busybox-7dff88458-99xmp -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92 -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT |                     |
	|         | busybox-7dff88458-99xmp              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92 -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| node    | add -p ha-286000 -v=7                | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:17 PDT | 16 Aug 24 10:17 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 10:01:48
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 10:01:48.113296    3758 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:01:48.113462    3758 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:01:48.113467    3758 out.go:358] Setting ErrFile to fd 2...
	I0816 10:01:48.113470    3758 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:01:48.113648    3758 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1276/.minikube/bin
	I0816 10:01:48.115192    3758 out.go:352] Setting JSON to false
	I0816 10:01:48.140204    3758 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1878,"bootTime":1723825830,"procs":433,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0816 10:01:48.140316    3758 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:01:48.194498    3758 out.go:177] * [ha-286000] minikube v1.33.1 on Darwin 14.6.1
	I0816 10:01:48.236650    3758 notify.go:220] Checking for updates...
	I0816 10:01:48.262360    3758 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:01:48.320415    3758 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:01:48.381368    3758 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0816 10:01:48.452505    3758 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:01:48.474398    3758 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:01:48.495459    3758 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:01:48.520120    3758 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:01:48.550379    3758 out.go:177] * Using the hyperkit driver based on user configuration
	I0816 10:01:48.592331    3758 start.go:297] selected driver: hyperkit
	I0816 10:01:48.592358    3758 start.go:901] validating driver "hyperkit" against <nil>
	I0816 10:01:48.592382    3758 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:01:48.596757    3758 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:01:48.596907    3758 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19461-1276/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0816 10:01:48.605545    3758 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0816 10:01:48.609748    3758 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:01:48.609772    3758 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0816 10:01:48.609805    3758 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 10:01:48.610046    3758 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:01:48.610094    3758 cni.go:84] Creating CNI manager for ""
	I0816 10:01:48.610103    3758 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0816 10:01:48.610108    3758 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0816 10:01:48.610236    3758 start.go:340] cluster config:
	{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:01:48.610323    3758 iso.go:125] acquiring lock: {Name:mkb430b43b5e7a8a683454482519837ed996c78d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:01:48.632334    3758 out.go:177] * Starting "ha-286000" primary control-plane node in "ha-286000" cluster
	I0816 10:01:48.653456    3758 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:01:48.653537    3758 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0816 10:01:48.653563    3758 cache.go:56] Caching tarball of preloaded images
	I0816 10:01:48.653777    3758 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 10:01:48.653813    3758 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:01:48.654332    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:01:48.654370    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json: {Name:mk79fdae2e45cdfc987caaf16c5f211150e90dc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:01:48.654995    3758 start.go:360] acquireMachinesLock for ha-286000: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:01:48.655106    3758 start.go:364] duration metric: took 87.458µs to acquireMachinesLock for "ha-286000"
	I0816 10:01:48.655154    3758 start.go:93] Provisioning new machine with config: &{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
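
These cluster-config blocks are Go structs rendered with fmt's %+v verb, which prints field names alongside values and nests braced sub-structs exactly as seen here. A minimal sketch, using a hypothetical trimmed-down struct (not minikube's real type) to reproduce the format:

package main

import "fmt"

// Hypothetical subset of the cluster config; field names mirror the dump above.
type KubernetesConfig struct {
	KubernetesVersion string
	ClusterName       string
	ContainerRuntime  string
}

type ClusterConfig struct {
	Name             string
	Memory           int
	CPUs             int
	Driver           string
	KubernetesConfig KubernetesConfig
}

func main() {
	cfg := ClusterConfig{
		Name: "ha-286000", Memory: 2200, CPUs: 2, Driver: "hyperkit",
		KubernetesConfig: KubernetesConfig{
			KubernetesVersion: "v1.31.0", ClusterName: "ha-286000", ContainerRuntime: "docker",
		},
	}
	// %+v yields {Name:ha-286000 Memory:2200 CPUs:2 Driver:hyperkit KubernetesConfig:{...}}
	fmt.Printf("%+v\n", cfg)
}
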
	I0816 10:01:48.655246    3758 start.go:125] createHost starting for "" (driver="hyperkit")
	I0816 10:01:48.677450    3758 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 10:01:48.677701    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:01:48.677786    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:01:48.687593    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51004
	I0816 10:01:48.687935    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:01:48.688347    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:01:48.688357    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:01:48.688562    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:01:48.688668    3758 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:01:48.688765    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:01:48.688895    3758 start.go:159] libmachine.API.Create for "ha-286000" (driver="hyperkit")
	I0816 10:01:48.688916    3758 client.go:168] LocalClient.Create starting
	I0816 10:01:48.688948    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem
	I0816 10:01:48.688999    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:01:48.689012    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:01:48.689063    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem
	I0816 10:01:48.689100    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:01:48.689111    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:01:48.689128    3758 main.go:141] libmachine: Running pre-create checks...
	I0816 10:01:48.689135    3758 main.go:141] libmachine: (ha-286000) Calling .PreCreateCheck
	I0816 10:01:48.689224    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:48.689427    3758 main.go:141] libmachine: (ha-286000) Calling .GetConfigRaw
	I0816 10:01:48.698771    3758 main.go:141] libmachine: Creating machine...
	I0816 10:01:48.698794    3758 main.go:141] libmachine: (ha-286000) Calling .Create
	I0816 10:01:48.699018    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:48.699296    3758 main.go:141] libmachine: (ha-286000) DBG | I0816 10:01:48.699015    3766 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:01:48.699450    3758 main.go:141] libmachine: (ha-286000) Downloading /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1276/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 10:01:48.884950    3758 main.go:141] libmachine: (ha-286000) DBG | I0816 10:01:48.884833    3766 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa...
	I0816 10:01:48.986348    3758 main.go:141] libmachine: (ha-286000) DBG | I0816 10:01:48.986275    3766 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/ha-286000.rawdisk...
	I0816 10:01:48.986388    3758 main.go:141] libmachine: (ha-286000) DBG | Writing magic tar header
	I0816 10:01:48.986396    3758 main.go:141] libmachine: (ha-286000) DBG | Writing SSH key tar header
	I0816 10:01:48.987038    3758 main.go:141] libmachine: (ha-286000) DBG | I0816 10:01:48.987004    3766 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000 ...
	I0816 10:01:49.360700    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:49.360726    3758 main.go:141] libmachine: (ha-286000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/hyperkit.pid
	I0816 10:01:49.360741    3758 main.go:141] libmachine: (ha-286000) DBG | Using UUID ad96de67-e238-408c-89eb-d74e5b68d297
	I0816 10:01:49.471852    3758 main.go:141] libmachine: (ha-286000) DBG | Generated MAC 66:c8:48:4e:12:1b
	I0816 10:01:49.471873    3758 main.go:141] libmachine: (ha-286000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000
	I0816 10:01:49.471908    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ad96de67-e238-408c-89eb-d74e5b68d297", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000b2330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:01:49.471951    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ad96de67-e238-408c-89eb-d74e5b68d297", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000b2330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:01:49.471996    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "ad96de67-e238-408c-89eb-d74e5b68d297", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/ha-286000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"}
	I0816 10:01:49.472043    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U ad96de67-e238-408c-89eb-d74e5b68d297 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/ha-286000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"
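
The driver launches hyperkit with the argument vector logged above. A minimal sketch of assembling and inspecting that command in Go, with the state directory, UUID, and kernel command line taken (and simplified) from the log; the real driver builds this through the hyperkit Go library rather than by hand:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	state := "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000"
	args := []string{
		"-A", "-u",
		"-F", state + "/hyperkit.pid", // pidfile
		"-c", "2", "-m", "2200M", // 2 vCPUs, 2200 MiB RAM
		"-s", "0:0,hostbridge", "-s", "31,lpc",
		"-s", "1:0,virtio-net",
		"-U", "ad96de67-e238-408c-89eb-d74e5b68d297",
		"-s", "2:0,virtio-blk," + state + "/ha-286000.rawdisk",
		"-s", "3,ahci-cd," + state + "/boot2docker.iso",
		"-s", "4,virtio-rnd",
		// Kernel cmdline abbreviated relative to the log above.
		"-f", "kexec," + state + "/bzimage," + state + "/initrd,loglevel=3 console=ttyS0",
	}
	cmd := exec.Command("/usr/local/bin/hyperkit", args...)
	fmt.Println(cmd.String()) // print rather than run; starting hyperkit requires root
}
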
	I0816 10:01:49.472063    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 10:01:49.475179    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: Pid is 3771
	I0816 10:01:49.475583    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 0
	I0816 10:01:49.475597    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:49.475674    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:49.476510    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:49.476583    3758 main.go:141] libmachine: (ha-286000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0816 10:01:49.476592    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:01:49.476602    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:01:49.476612    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:01:49.482826    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 10:01:49.534430    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 10:01:49.535040    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:01:49.535056    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:01:49.535064    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:01:49.535072    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:01:49.909510    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 10:01:49.909527    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 10:01:50.024439    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:01:50.024459    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:01:50.024479    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:01:50.024494    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:01:50.025363    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 10:01:50.025375    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 10:01:51.477573    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 1
	I0816 10:01:51.477587    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:51.477660    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:51.478481    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:51.478504    3758 main.go:141] libmachine: (ha-286000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0816 10:01:51.478511    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:01:51.478520    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:01:51.478531    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:01:53.479582    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 2
	I0816 10:01:53.479599    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:53.479760    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:53.480630    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:53.480678    3758 main.go:141] libmachine: (ha-286000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0816 10:01:53.480688    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:01:53.480697    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:01:53.480710    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:01:55.482614    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 3
	I0816 10:01:55.482630    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:55.482703    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:55.483616    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:55.483662    3758 main.go:141] libmachine: (ha-286000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0816 10:01:55.483672    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:01:55.483679    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:01:55.483687    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:01:55.581983    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:55 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 10:01:55.582063    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:55 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 10:01:55.582075    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:55 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 10:01:55.606414    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:55 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 10:01:57.484306    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 4
	I0816 10:01:57.484322    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:57.484414    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:57.485196    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:57.485261    3758 main.go:141] libmachine: (ha-286000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0816 10:01:57.485273    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:01:57.485286    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:01:57.485294    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:01:59.485978    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 5
	I0816 10:01:59.486008    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:59.486192    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:59.487610    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:59.487709    3758 main.go:141] libmachine: (ha-286000) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:01:59.487730    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:01:59.487784    3758 main.go:141] libmachine: (ha-286000) DBG | Found match: 66:c8:48:4e:12:1b
	I0816 10:01:59.487797    3758 main.go:141] libmachine: (ha-286000) DBG | IP: 192.169.0.5
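
The Attempt 0..5 loop above polls macOS's /var/db/dhcpd_leases every two seconds until the VM's generated MAC appears with a lease. A minimal sketch of that lookup, assuming the lease file's key=value block format in which hw_address carries a hardware-type prefix such as "1," and ip_address precedes it within each block:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findIP scans the dhcpd lease file for a block whose hw_address matches mac.
func findIP(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if v, ok := strings.CutPrefix(line, "ip_address="); ok {
			ip = v // remember the most recent address in this block
		}
		if v, ok := strings.CutPrefix(line, "hw_address="); ok {
			// strip the "1," hardware-type prefix before comparing
			if _, addr, found := strings.Cut(v, ","); found && strings.EqualFold(addr, mac) {
				return ip, nil
			}
		}
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}

func main() {
	ip, err := findIP("/var/db/dhcpd_leases", "66:c8:48:4e:12:1b")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(ip) // 192.169.0.5 in the run above
}
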
	I0816 10:01:59.487831    3758 main.go:141] libmachine: (ha-286000) Calling .GetConfigRaw
	I0816 10:01:59.488860    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:01:59.489069    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:01:59.489276    3758 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 10:01:59.489289    3758 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:01:59.489463    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:59.489567    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:59.490615    3758 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 10:01:59.490628    3758 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 10:01:59.490644    3758 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 10:01:59.490652    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:01:59.490782    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:01:59.490923    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:01:59.491055    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:01:59.491190    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:01:59.491362    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:01:59.491595    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:01:59.491605    3758 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 10:01:59.511220    3758 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0816 10:02:02.573095    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
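
WaitForSSH repeatedly runs "exit 0" over SSH until the guest's sshd accepts the key; the first attempt above fails with a handshake error mid-boot, then succeeds about three seconds later. A minimal retry sketch using golang.org/x/crypto/ssh, with the address, user, key path, and timings as assumptions rather than minikube's actual values:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// waitForSSH dials the guest and runs "exit 0" until sshd accepts the key.
func waitForSSH(addr, user, keyPath string, attempts int) error {
	pemBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(pemBytes)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway local VM
		Timeout:         5 * time.Second,
	}
	var lastErr error
	for i := 0; i < attempts; i++ {
		if client, err := ssh.Dial("tcp", addr, cfg); err != nil {
			lastErr = err // mid-boot this fails exactly like the handshake error above
		} else {
			sess, err := client.NewSession()
			if err == nil {
				err = sess.Run("exit 0") // the same probe the log shows
				sess.Close()
			}
			client.Close()
			if err == nil {
				return nil
			}
			lastErr = err
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("ssh never came up: %w", lastErr)
}

func main() {
	if err := waitForSSH("192.169.0.5:22", "docker", "/tmp/id_rsa", 10); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
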
	I0816 10:02:02.573108    3758 main.go:141] libmachine: Detecting the provisioner...
	I0816 10:02:02.573113    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:02.573255    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:02.573385    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.573489    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.573602    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:02.573753    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:02.573906    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:02.573914    3758 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 10:02:02.632510    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 10:02:02.632561    3758 main.go:141] libmachine: found compatible host: buildroot
	I0816 10:02:02.632568    3758 main.go:141] libmachine: Provisioning with buildroot...
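
Provisioner detection comes down to reading the ID field from the `cat /etc/os-release` output above. A minimal sketch of that parse (handling optionally quoted values; real os-release files can also use single quotes and escapes, ignored here):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// osReleaseID extracts the ID field from /etc/os-release-style content.
func osReleaseID(content string) string {
	sc := bufio.NewScanner(strings.NewReader(content))
	for sc.Scan() {
		if v, ok := strings.CutPrefix(sc.Text(), "ID="); ok {
			return strings.Trim(v, `"`) // values may be quoted
		}
	}
	return ""
}

func main() {
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\n"
	fmt.Println(osReleaseID(out)) // buildroot
}
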
	I0816 10:02:02.632573    3758 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:02:02.632721    3758 buildroot.go:166] provisioning hostname "ha-286000"
	I0816 10:02:02.632732    3758 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:02:02.632834    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:02.632956    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:02.633046    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.633122    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.633236    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:02.633363    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:02.633492    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:02.633500    3758 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-286000 && echo "ha-286000" | sudo tee /etc/hostname
	I0816 10:02:02.702336    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-286000
	
	I0816 10:02:02.702354    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:02.702482    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:02.702569    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.702670    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.702756    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:02.702893    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:02.703040    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:02.703052    3758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-286000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-286000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-286000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 10:02:02.766997    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 10:02:02.767016    3758 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19461-1276/.minikube CaCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19461-1276/.minikube}
	I0816 10:02:02.767026    3758 buildroot.go:174] setting up certificates
	I0816 10:02:02.767038    3758 provision.go:84] configureAuth start
	I0816 10:02:02.767044    3758 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:02:02.767175    3758 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:02:02.767270    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:02.767372    3758 provision.go:143] copyHostCerts
	I0816 10:02:02.767406    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:02:02.767476    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem, removing ...
	I0816 10:02:02.767485    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:02:02.767630    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem (1679 bytes)
	I0816 10:02:02.767837    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:02:02.767868    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem, removing ...
	I0816 10:02:02.767873    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:02:02.767991    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem (1078 bytes)
	I0816 10:02:02.768146    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:02:02.768179    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem, removing ...
	I0816 10:02:02.768184    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:02:02.768268    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem (1123 bytes)
	I0816 10:02:02.768410    3758 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem org=jenkins.ha-286000 san=[127.0.0.1 192.169.0.5 ha-286000 localhost minikube]
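
configureAuth mints a server certificate whose SANs cover the loopback address, the VM's IP, and the host names listed in the san=[...] entry above. A minimal sketch with crypto/x509, self-signed for brevity where the real flow signs with the ca.pem/ca-key.pem pair:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-286000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // mirrors CertExpiration in the config dump
		// SANs copied from the san=[...] list in the log line above.
		DNSNames:    []string{"ha-286000", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.5")},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed for brevity; using the template as its own parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
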
	I0816 10:02:02.915225    3758 provision.go:177] copyRemoteCerts
	I0816 10:02:02.915279    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 10:02:02.915299    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:02.915444    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:02.915558    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.915640    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:02.915723    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:02.953069    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 10:02:02.953145    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 10:02:02.973516    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 10:02:02.973600    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0816 10:02:02.993038    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 10:02:02.993100    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 10:02:03.013565    3758 provision.go:87] duration metric: took 246.514857ms to configureAuth
	I0816 10:02:03.013581    3758 buildroot.go:189] setting minikube options for container-runtime
	I0816 10:02:03.013724    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:02:03.013737    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:03.013889    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:03.013978    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:03.014067    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.014147    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.014250    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:03.014369    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:03.014498    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:03.014506    3758 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 10:02:03.073645    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 10:02:03.073660    3758 buildroot.go:70] root file system type: tmpfs
	I0816 10:02:03.073734    3758 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 10:02:03.073745    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:03.073872    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:03.073966    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.074063    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.074164    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:03.074292    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:03.074429    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:03.074479    3758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 10:02:03.148891    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 10:02:03.148912    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:03.149052    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:03.149138    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.149235    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.149324    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:03.149466    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:03.149604    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:03.149616    3758 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 10:02:04.780498    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
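
The diff-or-replace command above only swaps in docker.service.new and restarts the daemon when the rendered unit differs from what is on disk; here the old file does not exist yet, so diff fails and the replace branch runs. A minimal sketch of the same idempotent pattern in Go, assuming it runs on the guest with systemd available (paths and unit body are placeholders):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installUnit writes unit to path and restarts the service, but only when the
// on-disk content differs (a missing file counts as different).
func installUnit(path string, unit []byte, service string) error {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, unit) {
		return nil // already up to date; skip the restart
	}
	if err := os.WriteFile(path, unit, 0o644); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"},
		{"enable", service},
		{"restart", service},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
	if err := installUnit("/lib/systemd/system/docker.service", unit, "docker"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
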
	I0816 10:02:04.780519    3758 main.go:141] libmachine: Checking connection to Docker...
	I0816 10:02:04.780526    3758 main.go:141] libmachine: (ha-286000) Calling .GetURL
	I0816 10:02:04.780691    3758 main.go:141] libmachine: Docker is up and running!
	I0816 10:02:04.780698    3758 main.go:141] libmachine: Reticulating splines...
	I0816 10:02:04.780702    3758 client.go:171] duration metric: took 16.091876944s to LocalClient.Create
	I0816 10:02:04.780713    3758 start.go:167] duration metric: took 16.091916939s to libmachine.API.Create "ha-286000"
	I0816 10:02:04.780722    3758 start.go:293] postStartSetup for "ha-286000" (driver="hyperkit")
	I0816 10:02:04.780729    3758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 10:02:04.780738    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:04.780873    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 10:02:04.780884    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:04.780956    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:04.781047    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:04.781154    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:04.781275    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:04.819650    3758 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 10:02:04.823314    3758 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 10:02:04.823335    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/addons for local assets ...
	I0816 10:02:04.823443    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/files for local assets ...
	I0816 10:02:04.823624    3758 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> 18312.pem in /etc/ssl/certs
	I0816 10:02:04.823631    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /etc/ssl/certs/18312.pem
	I0816 10:02:04.823837    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 10:02:04.831860    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:02:04.850901    3758 start.go:296] duration metric: took 70.172878ms for postStartSetup
	I0816 10:02:04.850935    3758 main.go:141] libmachine: (ha-286000) Calling .GetConfigRaw
	I0816 10:02:04.851561    3758 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:02:04.851702    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:02:04.852053    3758 start.go:128] duration metric: took 16.196888252s to createHost
	I0816 10:02:04.852071    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:04.852167    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:04.852261    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:04.852344    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:04.852420    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:04.852525    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:04.852648    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:04.852655    3758 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 10:02:04.911299    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723827724.986438448
	
	I0816 10:02:04.911317    3758 fix.go:216] guest clock: 1723827724.986438448
	I0816 10:02:04.911322    3758 fix.go:229] Guest: 2024-08-16 10:02:04.986438448 -0700 PDT Remote: 2024-08-16 10:02:04.852061 -0700 PDT m=+16.776125584 (delta=134.377448ms)
	I0816 10:02:04.911341    3758 fix.go:200] guest clock delta is within tolerance: 134.377448ms
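
The guest-clock check above parses the `date +%s.%N` output, subtracts the host's wall clock at the moment of the call, and accepts the skew when it falls within tolerance (the 134ms delta here passes). A minimal sketch of the comparison; the one-second threshold is an assumption, and float parsing trades a little nanosecond precision for brevity:

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	const tolerance = time.Second       // assumed threshold
	guestOut := "1723827724.986438448" // date +%s.%N output from the guest

	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest) // in the real flow, measured against the host time at the SSH call
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance; would sync\n", delta)
	}
}
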
	I0816 10:02:04.911345    3758 start.go:83] releasing machines lock for "ha-286000", held for 16.256325696s
	I0816 10:02:04.911364    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:04.911498    3758 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:02:04.911592    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:04.911942    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:04.912068    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:04.912144    3758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 10:02:04.912171    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:04.912218    3758 ssh_runner.go:195] Run: cat /version.json
	I0816 10:02:04.912231    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:04.912262    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:04.912305    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:04.912360    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:04.912384    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:04.912437    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:04.912464    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:04.912547    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:04.912567    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:04.945688    3758 ssh_runner.go:195] Run: systemctl --version
	I0816 10:02:04.997158    3758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 10:02:05.002278    3758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 10:02:05.002315    3758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 10:02:05.015089    3758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 10:02:05.015098    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:02:05.015195    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:02:05.030338    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 10:02:05.038588    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 10:02:05.046898    3758 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 10:02:05.046932    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 10:02:05.055211    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:02:05.063533    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 10:02:05.073244    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:02:05.081895    3758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 10:02:05.090915    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 10:02:05.099773    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 10:02:05.108719    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 10:02:05.117772    3758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 10:02:05.125574    3758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
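
The run of sed edits above rewrites /etc/containerd/config.toml in place: sandbox image, OOM-score restriction, cgroup driver, runc v2 runtime, and CNI conf dir. A minimal sketch of one such edit expressed as a Go regexp replace, using the SystemdCgroup line as the example:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n    SystemdCgroup = true\n"
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}
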
	I0816 10:02:05.145403    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:05.261568    3758 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0816 10:02:05.278920    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:02:05.278996    3758 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 10:02:05.293925    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:02:05.306704    3758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 10:02:05.320000    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:02:05.330230    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:02:05.340255    3758 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 10:02:05.369098    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:02:05.379374    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:02:05.395120    3758 ssh_runner.go:195] Run: which cri-dockerd
	I0816 10:02:05.397921    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 10:02:05.404866    3758 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 10:02:05.417935    3758 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 10:02:05.514793    3758 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 10:02:05.626208    3758 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 10:02:05.626292    3758 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 10:02:05.640317    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:05.737978    3758 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:02:08.105430    3758 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.367458496s)
	I0816 10:02:08.105491    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0816 10:02:08.115970    3758 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0816 10:02:08.129325    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:02:08.140312    3758 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0816 10:02:08.237628    3758 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0816 10:02:08.340285    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:08.436168    3758 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0816 10:02:08.457808    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:02:08.468314    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:08.561830    3758 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0816 10:02:08.620080    3758 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0816 10:02:08.620164    3758 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0816 10:02:08.624716    3758 start.go:563] Will wait 60s for crictl version
	I0816 10:02:08.624764    3758 ssh_runner.go:195] Run: which crictl
	I0816 10:02:08.627806    3758 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 10:02:08.660437    3758 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0816 10:02:08.660505    3758 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:02:08.678073    3758 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:02:08.722165    3758 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0816 10:02:08.722217    3758 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:02:08.722605    3758 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0816 10:02:08.727140    3758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
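
The hosts-file update above strips any stale host.minikube.internal line before appending the fresh mapping, so repeated starts never accumulate duplicate entries. A minimal sketch of the same rewrite in Go; the real flow stages the result through /tmp and sudo-copies it over /etc/hosts:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost rewrites hosts content so exactly one line maps host.minikube.internal.
func upsertHost(content, ip string) string {
	var kept []string
	for _, line := range strings.Split(content, "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line) // drop any existing mapping, keep everything else
		}
	}
	kept = append(kept, ip+"\thost.minikube.internal")
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	fmt.Print(upsertHost(strings.TrimRight(string(data), "\n"), "192.169.0.1"))
}
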
	I0816 10:02:08.736769    3758 kubeadm.go:883] updating cluster {Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 10:02:08.736830    3758 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:02:08.736887    3758 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 10:02:08.748223    3758 docker.go:685] Got preloaded images: 
	I0816 10:02:08.748236    3758 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.0 wasn't preloaded
	I0816 10:02:08.748294    3758 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0816 10:02:08.755601    3758 ssh_runner.go:195] Run: which lz4
	I0816 10:02:08.758510    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0816 10:02:08.758630    3758 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 10:02:08.761594    3758 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 10:02:08.761610    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342554258 bytes)
	I0816 10:02:09.771496    3758 docker.go:649] duration metric: took 1.012922149s to copy over tarball
	I0816 10:02:09.771559    3758 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 10:02:12.050040    3758 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.27849665s)
	I0816 10:02:12.050054    3758 ssh_runner.go:146] rm: /preloaded.tar.lz4
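
The preload path avoids pulling images over the network: a ~342 MB lz4 tarball of the Docker image store is scp'd into the VM and unpacked directly under /var, so /var/lib/docker is pre-populated before the daemon restart that follows. The extraction command reproduced as an os/exec sketch (assumes GNU tar and lz4 on the target):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // --xattrs / --xattrs-include keep file capabilities on extracted
        // binaries; -I lz4 streams decompression; -C /var unpacks in place.
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("extract failed: %v\n%s", err, out)
        }
    }
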
	I0816 10:02:12.075438    3758 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0816 10:02:12.083331    3758 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0816 10:02:12.097261    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:12.204348    3758 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:02:14.516272    3758 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.311936705s)
	I0816 10:02:14.516363    3758 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 10:02:14.529500    3758 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0816 10:02:14.529519    3758 cache_images.go:84] Images are preloaded, skipping loading
	I0816 10:02:14.529530    3758 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.0 docker true true} ...
	I0816 10:02:14.529607    3758 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-286000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 10:02:14.529674    3758 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0816 10:02:14.568093    3758 cni.go:84] Creating CNI manager for ""
	I0816 10:02:14.568107    3758 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0816 10:02:14.568120    3758 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 10:02:14.568134    3758 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-286000 NodeName:ha-286000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 10:02:14.568222    3758 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-286000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
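
The kubeadm config printed above is produced by substituting the values from the kubeadm options line into a Go text/template. A toy sketch of that mechanism (the fragment and field names here are illustrative only, not minikube's actual template, which lives in its bootstrapper package):

    package main

    import (
        "log"
        "os"
        "text/template"
    )

    // A trimmed stand-in for the InitConfiguration fragment above.
    const frag = "apiVersion: kubeadm.k8s.io/v1beta3\n" +
        "kind: InitConfiguration\n" +
        "localAPIEndpoint:\n" +
        "  advertiseAddress: {{.AdvertiseAddress}}\n" +
        "  bindPort: {{.APIServerPort}}\n"

    func main() {
        t := template.Must(template.New("kubeadm").Parse(frag))
        data := struct {
            AdvertiseAddress string
            APIServerPort    int
        }{"192.169.0.5", 8443}
        if err := t.Execute(os.Stdout, data); err != nil {
            log.Fatal(err)
        }
    }
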
	
	I0816 10:02:14.568237    3758 kube-vip.go:115] generating kube-vip config ...
	I0816 10:02:14.568286    3758 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 10:02:14.580706    3758 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 10:02:14.580778    3758 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
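
kube-vip pins the HA endpoint 192.169.0.254 to whichever control-plane node currently holds the plndr-cp-lock Lease named in the manifest above (the earlier modprobe of the ip_vs modules is what allows lb_enable to be switched on). A client-go sketch for inspecting the current VIP holder, assuming client-go is available and a kubeconfig sits at the default ~/.kube/config path:

    package main

    import (
        "context"
        "fmt"
        "log"
        "path/filepath"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/homedir"
    )

    func main() {
        kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // plndr-cp-lock is the vip_leasename from the manifest above.
        lease, err := cs.CoordinationV1().Leases("kube-system").Get(
            context.Background(), "plndr-cp-lock", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("VIP holder:", *lease.Spec.HolderIdentity)
    }
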
	I0816 10:02:14.580832    3758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 10:02:14.588124    3758 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 10:02:14.588174    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0816 10:02:14.595283    3758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0816 10:02:14.608721    3758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 10:02:14.622036    3758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0816 10:02:14.635525    3758 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1446 bytes)
	I0816 10:02:14.648868    3758 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0816 10:02:14.651748    3758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:02:14.661305    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:14.752561    3758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:02:14.768967    3758 certs.go:68] Setting up /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000 for IP: 192.169.0.5
	I0816 10:02:14.768980    3758 certs.go:194] generating shared ca certs ...
	I0816 10:02:14.768991    3758 certs.go:226] acquiring lock for ca certs: {Name:mka549fbc467849834d2267da204028c49d88a3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:14.769183    3758 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key
	I0816 10:02:14.769258    3758 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key
	I0816 10:02:14.769269    3758 certs.go:256] generating profile certs ...
	I0816 10:02:14.769315    3758 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key
	I0816 10:02:14.769328    3758 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt with IP's: []
	I0816 10:02:14.849573    3758 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt ...
	I0816 10:02:14.849589    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt: {Name:mk9c6a2b5871e8d33f12e0a58268b028ea37ab4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:14.849911    3758 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key ...
	I0816 10:02:14.849919    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key: {Name:mk7d8ed264e583d7ced6b16b60c72c11b15a1f22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:14.850137    3758 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.af99fd6a
	I0816 10:02:14.850151    3758 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.af99fd6a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.254]
	I0816 10:02:14.968129    3758 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.af99fd6a ...
	I0816 10:02:14.968143    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.af99fd6a: {Name:mk3b8945a16852dca8274ec1ab3a9f2eade80831 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:14.968464    3758 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.af99fd6a ...
	I0816 10:02:14.968473    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.af99fd6a: {Name:mk90fcce3348a214c4733fe5a1fb806faff7773f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:14.968717    3758 certs.go:381] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.af99fd6a -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt
	I0816 10:02:14.968940    3758 certs.go:385] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.af99fd6a -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key
	I0816 10:02:14.969171    3758 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key
	I0816 10:02:14.969185    3758 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt with IP's: []
	I0816 10:02:15.029329    3758 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt ...
	I0816 10:02:15.029338    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt: {Name:mk70aaa3903fb58df2124d779dbea07aa95769f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:15.029604    3758 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key ...
	I0816 10:02:15.029611    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key: {Name:mka3a85354927e8da31f9dfc67f92dea3c77ca65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:15.029850    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 10:02:15.029876    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 10:02:15.029898    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 10:02:15.029940    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 10:02:15.029958    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 10:02:15.029990    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 10:02:15.030008    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 10:02:15.030025    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
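
The apiserver profile cert generated above carries five IP SANs: the in-cluster service IP 10.96.0.1, loopback, 10.0.0.1, the node IP 192.169.0.5, and the HA VIP 192.169.0.254. A condensed crypto/x509 sketch of minting a cert with those SANs (self-signed here for brevity; minikube signs with its minikubeCA):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration above
            // The IP SANs logged for apiserver.crt.af99fd6a:
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.169.0.5"),
                net.ParseIP("192.169.0.254"),
            },
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tpl, tpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
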
	I0816 10:02:15.030118    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem (1338 bytes)
	W0816 10:02:15.030163    3758 certs.go:480] ignoring /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831_empty.pem, impossibly tiny 0 bytes
	I0816 10:02:15.030171    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 10:02:15.030240    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem (1078 bytes)
	I0816 10:02:15.030268    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem (1123 bytes)
	I0816 10:02:15.030354    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem (1679 bytes)
	I0816 10:02:15.030467    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:02:15.030499    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:15.030525    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem -> /usr/share/ca-certificates/1831.pem
	I0816 10:02:15.030542    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /usr/share/ca-certificates/18312.pem
	I0816 10:02:15.031030    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 10:02:15.051618    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 10:02:15.071318    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 10:02:15.091017    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 10:02:15.110497    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0816 10:02:15.131033    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 10:02:15.150747    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 10:02:15.170186    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 10:02:15.189808    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 10:02:15.209868    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem --> /usr/share/ca-certificates/1831.pem (1338 bytes)
	I0816 10:02:15.229378    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /usr/share/ca-certificates/18312.pem (1708 bytes)
	I0816 10:02:15.248667    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 10:02:15.262035    3758 ssh_runner.go:195] Run: openssl version
	I0816 10:02:15.266135    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 10:02:15.275959    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:15.279367    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:48 /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:15.279403    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:15.283611    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 10:02:15.291872    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1831.pem && ln -fs /usr/share/ca-certificates/1831.pem /etc/ssl/certs/1831.pem"
	I0816 10:02:15.299890    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1831.pem
	I0816 10:02:15.303179    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 16:57 /usr/share/ca-certificates/1831.pem
	I0816 10:02:15.303212    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1831.pem
	I0816 10:02:15.307423    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1831.pem /etc/ssl/certs/51391683.0"
	I0816 10:02:15.315741    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18312.pem && ln -fs /usr/share/ca-certificates/18312.pem /etc/ssl/certs/18312.pem"
	I0816 10:02:15.324088    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18312.pem
	I0816 10:02:15.327441    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 16:57 /usr/share/ca-certificates/18312.pem
	I0816 10:02:15.327479    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18312.pem
	I0816 10:02:15.331723    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18312.pem /etc/ssl/certs/3ec20f2e.0"
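
Each ln -fs above installs a CA into OpenSSL's hashed directory layout: the link is named after the certificate's subject hash (what openssl x509 -hash -noout prints, e.g. b5213941) plus a .0 suffix, which is how lookups in /etc/ssl/certs are indexed. The same step as a Go sketch shelling out to openssl:

    package main

    import (
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // hashLink creates /etc/ssl/certs/<subject-hash>.0 -> certPath,
    // the layout the ln -fs commands above produce.
    func hashLink(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        os.Remove(link) // emulate ln -f
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            log.Fatal(err)
        }
    }
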
	I0816 10:02:15.339921    3758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 10:02:15.343061    3758 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 10:02:15.343109    3758 kubeadm.go:392] StartCluster: {Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:02:15.343194    3758 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0816 10:02:15.356016    3758 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 10:02:15.363405    3758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 10:02:15.370635    3758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 10:02:15.377806    3758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 10:02:15.377819    3758 kubeadm.go:157] found existing configuration files:
	
	I0816 10:02:15.377855    3758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 10:02:15.384868    3758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 10:02:15.384903    3758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 10:02:15.392001    3758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 10:02:15.398935    3758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 10:02:15.398971    3758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 10:02:15.406181    3758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 10:02:15.413108    3758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 10:02:15.413142    3758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 10:02:15.422603    3758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 10:02:15.435692    3758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 10:02:15.435749    3758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 10:02:15.450920    3758 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 10:02:15.521417    3758 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 10:02:15.521501    3758 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 10:02:15.590843    3758 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 10:02:15.590923    3758 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 10:02:15.591008    3758 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 10:02:15.600874    3758 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 10:02:15.632377    3758 out.go:235]   - Generating certificates and keys ...
	I0816 10:02:15.632427    3758 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 10:02:15.632475    3758 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 10:02:15.791885    3758 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0816 10:02:15.852803    3758 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0816 10:02:15.961982    3758 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0816 10:02:16.146557    3758 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0816 10:02:16.493401    3758 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0816 10:02:16.493556    3758 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-286000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I0816 10:02:16.704832    3758 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0816 10:02:16.705108    3758 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-286000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I0816 10:02:16.922818    3758 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0816 10:02:17.082810    3758 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0816 10:02:17.393939    3758 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0816 10:02:17.394070    3758 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 10:02:17.465159    3758 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 10:02:17.731984    3758 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 10:02:18.019833    3758 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 10:02:18.164168    3758 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 10:02:18.273756    3758 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 10:02:18.274215    3758 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 10:02:18.275949    3758 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 10:02:18.300490    3758 out.go:235]   - Booting up control plane ...
	I0816 10:02:18.300569    3758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 10:02:18.300638    3758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 10:02:18.300693    3758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 10:02:18.300780    3758 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 10:02:18.300850    3758 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 10:02:18.300879    3758 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 10:02:18.405245    3758 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 10:02:18.405330    3758 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 10:02:18.909805    3758 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 504.760374ms
	I0816 10:02:18.909928    3758 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0816 10:02:24.844637    3758 kubeadm.go:310] [api-check] The API server is healthy after 5.938882642s
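
Both health gates above are plain HTTP polls: the kubelet's healthz on 127.0.0.1:10248, then the API server's healthz. A sketch of the retry loop, using the 4m0s budget from the log (the poll interval is an assumption):

    package main

    import (
        "fmt"
        "log"
        "net/http"
        "time"
    )

    // pollHealthz mirrors kubeadm's wait loops above: retry until /healthz
    // answers 200 or the budget runs out.
    func pollHealthz(url string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := http.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s not healthy within %s", url, timeout)
    }

    func main() {
        if err := pollHealthz("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
            log.Fatal(err)
        }
        fmt.Println("kubelet is healthy")
    }
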
	I0816 10:02:24.854864    3758 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 10:02:24.862134    3758 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 10:02:24.890940    3758 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 10:02:24.891101    3758 kubeadm.go:310] [mark-control-plane] Marking the node ha-286000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 10:02:24.901441    3758 kubeadm.go:310] [bootstrap-token] Using token: 73merd.1elxantqs1p5mkiz
	I0816 10:02:24.939545    3758 out.go:235]   - Configuring RBAC rules ...
	I0816 10:02:24.939737    3758 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 10:02:24.944670    3758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 10:02:24.951393    3758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 10:02:24.953383    3758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 10:02:24.955899    3758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 10:02:24.958387    3758 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 10:02:25.251799    3758 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 10:02:25.676108    3758 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 10:02:26.249233    3758 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 10:02:26.249936    3758 kubeadm.go:310] 
	I0816 10:02:26.250001    3758 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 10:02:26.250014    3758 kubeadm.go:310] 
	I0816 10:02:26.250090    3758 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 10:02:26.250102    3758 kubeadm.go:310] 
	I0816 10:02:26.250122    3758 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 10:02:26.250175    3758 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 10:02:26.250221    3758 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 10:02:26.250229    3758 kubeadm.go:310] 
	I0816 10:02:26.250268    3758 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 10:02:26.250272    3758 kubeadm.go:310] 
	I0816 10:02:26.250305    3758 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 10:02:26.250313    3758 kubeadm.go:310] 
	I0816 10:02:26.250359    3758 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 10:02:26.250425    3758 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 10:02:26.250484    3758 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 10:02:26.250491    3758 kubeadm.go:310] 
	I0816 10:02:26.250569    3758 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 10:02:26.250648    3758 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 10:02:26.250656    3758 kubeadm.go:310] 
	I0816 10:02:26.250716    3758 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 73merd.1elxantqs1p5mkiz \
	I0816 10:02:26.250793    3758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:995590413ea712c4f7db2cf849d28264ebc291e6cd53d741ce1d22c1daaa1602 \
	I0816 10:02:26.250811    3758 kubeadm.go:310] 	--control-plane 
	I0816 10:02:26.250817    3758 kubeadm.go:310] 
	I0816 10:02:26.250879    3758 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 10:02:26.250885    3758 kubeadm.go:310] 
	I0816 10:02:26.250947    3758 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 73merd.1elxantqs1p5mkiz \
	I0816 10:02:26.251036    3758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:995590413ea712c4f7db2cf849d28264ebc291e6cd53d741ce1d22c1daaa1602 
	I0816 10:02:26.251297    3758 kubeadm.go:310] W0816 17:02:15.599099    1566 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 10:02:26.251521    3758 kubeadm.go:310] W0816 17:02:15.600578    1566 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 10:02:26.251608    3758 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
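
The --discovery-token-ca-cert-hash printed in both join commands pins the cluster CA for joining nodes: it is a SHA-256 over the DER-encoded Subject Public Key Info of the CA certificate, so a joining node can verify the CA it discovers without trusting the network. A sketch that recomputes it from the ca.crt copied into the VM earlier:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            log.Fatal("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // kubeadm hashes the DER SubjectPublicKeyInfo, not the whole cert.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }
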
	I0816 10:02:26.251620    3758 cni.go:84] Creating CNI manager for ""
	I0816 10:02:26.251624    3758 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0816 10:02:26.289324    3758 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 10:02:26.363318    3758 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0816 10:02:26.368971    3758 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0816 10:02:26.368983    3758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0816 10:02:26.382855    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0816 10:02:26.629869    3758 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 10:02:26.629947    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-286000 minikube.k8s.io/updated_at=2024_08_16T10_02_26_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd minikube.k8s.io/name=ha-286000 minikube.k8s.io/primary=true
	I0816 10:02:26.629949    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:26.658712    3758 ops.go:34] apiserver oom_adj: -16
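
The ops.go check above reads /proc/$(pgrep kube-apiserver)/oom_adj (run two lines earlier) and gets -16, confirming the API server carries the strongly protected OOM score the kubelet assigns node-critical static pods (-997 on the oom_score_adj scale, which the kernel reports as about -16 on the legacy oom_adj scale). The same check against the modern interface, as a sketch assuming pgrep is present (-o picks the oldest match):

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pid, err := exec.Command("pgrep", "-o", "kube-apiserver").Output()
        if err != nil {
            log.Fatal(err)
        }
        adj, err := os.ReadFile(
            fmt.Sprintf("/proc/%s/oom_score_adj", strings.TrimSpace(string(pid))))
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("kube-apiserver oom_score_adj: %s", adj)
    }
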
	I0816 10:02:26.756402    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:27.257667    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:27.757259    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:28.256864    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:28.757602    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:29.256802    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:29.757282    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:29.882342    3758 kubeadm.go:1113] duration metric: took 3.252511045s to wait for elevateKubeSystemPrivileges
	I0816 10:02:29.882365    3758 kubeadm.go:394] duration metric: took 14.539506386s to StartCluster
	I0816 10:02:29.882380    3758 settings.go:142] acquiring lock: {Name:mk60e377244eac756871b74b6bd45115654652a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:29.882471    3758 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:02:29.882922    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/kubeconfig: {Name:mk7f4491c8ec5cf96c96a5a1a5dbfba4301394a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:29.883167    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 10:02:29.883170    3758 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:02:29.883182    3758 start.go:241] waiting for startup goroutines ...
	I0816 10:02:29.883204    3758 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 10:02:29.883238    3758 addons.go:69] Setting storage-provisioner=true in profile "ha-286000"
	I0816 10:02:29.883244    3758 addons.go:69] Setting default-storageclass=true in profile "ha-286000"
	I0816 10:02:29.883261    3758 addons.go:234] Setting addon storage-provisioner=true in "ha-286000"
	I0816 10:02:29.883278    3758 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-286000"
	I0816 10:02:29.883280    3758 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:02:29.883305    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:02:29.883558    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:29.883573    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:29.883575    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:29.883598    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:29.892969    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51028
	I0816 10:02:29.893238    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51030
	I0816 10:02:29.893356    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:29.893605    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:29.893730    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:29.893741    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:29.893938    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:29.893951    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:29.893976    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:29.894142    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:29.894254    3758 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:02:29.894338    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:29.894365    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:29.894397    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:29.894424    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:02:29.896690    3758 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:02:29.896968    3758 kapi.go:59] client config for ha-286000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key", CAFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xa202f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 10:02:29.897373    3758 cert_rotation.go:140] Starting client certificate rotation controller
	I0816 10:02:29.897530    3758 addons.go:234] Setting addon default-storageclass=true in "ha-286000"
	I0816 10:02:29.897550    3758 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:02:29.897771    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:29.897789    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:29.903669    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51032
	I0816 10:02:29.904077    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:29.904431    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:29.904451    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:29.904736    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:29.904889    3758 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:02:29.904993    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:29.905054    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:02:29.906086    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:29.906730    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51034
	I0816 10:02:29.907081    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:29.907463    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:29.907491    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:29.907694    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:29.908045    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:29.908061    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:29.917300    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51036
	I0816 10:02:29.917667    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:29.918020    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:29.918034    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:29.918277    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:29.918384    3758 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:02:29.918483    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:29.918572    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:02:29.919534    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:29.919671    3758 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 10:02:29.919680    3758 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 10:02:29.919688    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:29.919789    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:29.919896    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:29.919990    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:29.920072    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:29.928735    3758 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 10:02:29.965711    3758 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 10:02:29.965725    3758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 10:02:29.965744    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:29.965926    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:29.966043    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:29.966151    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:29.966280    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:30.033245    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 10:02:30.037035    3758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 10:02:30.090040    3758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 10:02:30.396267    3758 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
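
The replace above rewrites the coredns ConfigMap in flight: sed inserts a hosts plugin block (with fallthrough) immediately before the forward directive, and a log directive before errors, so host.minikube.internal resolves in-cluster while everything else still goes upstream. After the replace, the Corefile should contain stanzas roughly like (elided directives unchanged):

        log
        errors
        ...
        hosts {
           192.169.0.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
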
	I0816 10:02:30.396311    3758 main.go:141] libmachine: Making call to close driver server
	I0816 10:02:30.396323    3758 main.go:141] libmachine: (ha-286000) Calling .Close
	I0816 10:02:30.396482    3758 main.go:141] libmachine: Successfully made call to close driver server
	I0816 10:02:30.396488    3758 main.go:141] libmachine: (ha-286000) DBG | Closing plugin on server side
	I0816 10:02:30.396492    3758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 10:02:30.396501    3758 main.go:141] libmachine: Making call to close driver server
	I0816 10:02:30.396507    3758 main.go:141] libmachine: (ha-286000) Calling .Close
	I0816 10:02:30.396655    3758 main.go:141] libmachine: Successfully made call to close driver server
	I0816 10:02:30.396662    3758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 10:02:30.396713    3758 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0816 10:02:30.396725    3758 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0816 10:02:30.396804    3758 round_trippers.go:463] GET https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0816 10:02:30.396810    3758 round_trippers.go:469] Request Headers:
	I0816 10:02:30.396817    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:02:30.396822    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:02:30.402286    3758 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 10:02:30.402706    3758 round_trippers.go:463] PUT https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0816 10:02:30.402713    3758 round_trippers.go:469] Request Headers:
	I0816 10:02:30.402718    3758 round_trippers.go:473]     Content-Type: application/json
	I0816 10:02:30.402721    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:02:30.402723    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:02:30.404290    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:02:30.404403    3758 main.go:141] libmachine: Making call to close driver server
	I0816 10:02:30.404416    3758 main.go:141] libmachine: (ha-286000) Calling .Close
	I0816 10:02:30.404565    3758 main.go:141] libmachine: Successfully made call to close driver server
	I0816 10:02:30.404575    3758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 10:02:30.404586    3758 main.go:141] libmachine: (ha-286000) DBG | Closing plugin on server side
	I0816 10:02:30.497806    3758 main.go:141] libmachine: Making call to close driver server
	I0816 10:02:30.497818    3758 main.go:141] libmachine: (ha-286000) Calling .Close
	I0816 10:02:30.498006    3758 main.go:141] libmachine: Successfully made call to close driver server
	I0816 10:02:30.498011    3758 main.go:141] libmachine: (ha-286000) DBG | Closing plugin on server side
	I0816 10:02:30.498016    3758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 10:02:30.498023    3758 main.go:141] libmachine: Making call to close driver server
	I0816 10:02:30.498040    3758 main.go:141] libmachine: (ha-286000) Calling .Close
	I0816 10:02:30.498160    3758 main.go:141] libmachine: Successfully made call to close driver server
	I0816 10:02:30.498168    3758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 10:02:30.498179    3758 main.go:141] libmachine: (ha-286000) DBG | Closing plugin on server side
	I0816 10:02:30.538607    3758 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0816 10:02:30.596522    3758 addons.go:510] duration metric: took 713.336883ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0816 10:02:30.596578    3758 start.go:246] waiting for cluster config update ...
	I0816 10:02:30.596599    3758 start.go:255] writing updated cluster config ...
	I0816 10:02:30.634683    3758 out.go:201] 
	I0816 10:02:30.655754    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:02:30.655861    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:02:30.677634    3758 out.go:177] * Starting "ha-286000-m02" control-plane node in "ha-286000" cluster
	I0816 10:02:30.737443    3758 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:02:30.737475    3758 cache.go:56] Caching tarball of preloaded images
	I0816 10:02:30.737660    3758 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 10:02:30.737679    3758 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:02:30.737771    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:02:30.738414    3758 start.go:360] acquireMachinesLock for ha-286000-m02: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:02:30.738494    3758 start.go:364] duration metric: took 63.585µs to acquireMachinesLock for "ha-286000-m02"
	I0816 10:02:30.738517    3758 start.go:93] Provisioning new machine with config: &{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:02:30.738590    3758 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I0816 10:02:30.760743    3758 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 10:02:30.760905    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:30.760935    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:30.771028    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51041
	I0816 10:02:30.771383    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:30.771745    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:30.771760    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:30.771967    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:30.772077    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:02:30.772157    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:30.772261    3758 start.go:159] libmachine.API.Create for "ha-286000" (driver="hyperkit")
	I0816 10:02:30.772276    3758 client.go:168] LocalClient.Create starting
	I0816 10:02:30.772303    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem
	I0816 10:02:30.772358    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:02:30.772368    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:02:30.772413    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem
	I0816 10:02:30.772450    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:02:30.772460    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:02:30.772472    3758 main.go:141] libmachine: Running pre-create checks...
	I0816 10:02:30.772477    3758 main.go:141] libmachine: (ha-286000-m02) Calling .PreCreateCheck
	I0816 10:02:30.772545    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:30.772568    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetConfigRaw
	I0816 10:02:30.781772    3758 main.go:141] libmachine: Creating machine...
	I0816 10:02:30.781789    3758 main.go:141] libmachine: (ha-286000-m02) Calling .Create
	I0816 10:02:30.781943    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:30.782181    3758 main.go:141] libmachine: (ha-286000-m02) DBG | I0816 10:02:30.781934    3805 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:02:30.782269    3758 main.go:141] libmachine: (ha-286000-m02) Downloading /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1276/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 10:02:30.982082    3758 main.go:141] libmachine: (ha-286000-m02) DBG | I0816 10:02:30.982020    3805 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa...
	I0816 10:02:31.266609    3758 main.go:141] libmachine: (ha-286000-m02) DBG | I0816 10:02:31.266514    3805 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/ha-286000-m02.rawdisk...
	I0816 10:02:31.266629    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Writing magic tar header
	I0816 10:02:31.266638    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Writing SSH key tar header
	I0816 10:02:31.267382    3758 main.go:141] libmachine: (ha-286000-m02) DBG | I0816 10:02:31.267242    3805 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02 ...
	I0816 10:02:31.642864    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:31.642889    3758 main.go:141] libmachine: (ha-286000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/hyperkit.pid
	I0816 10:02:31.642912    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Using UUID f7630301-2bc3-4935-a751-ac72999da031
	I0816 10:02:31.669349    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Generated MAC 72:69:8f:11:68:1d
	I0816 10:02:31.669369    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000
	I0816 10:02:31.669397    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f7630301-2bc3-4935-a751-ac72999da031", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:02:31.669426    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f7630301-2bc3-4935-a751-ac72999da031", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:02:31.669455    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "f7630301-2bc3-4935-a751-ac72999da031", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/ha-286000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"}
	I0816 10:02:31.669484    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U f7630301-2bc3-4935-a751-ac72999da031 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/ha-286000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"
	I0816 10:02:31.669499    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 10:02:31.672492    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: Pid is 3806
	I0816 10:02:31.672957    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 0
	I0816 10:02:31.673000    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:31.673054    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:31.674014    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:31.674086    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:02:31.674098    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:02:31.674120    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:02:31.674129    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:02:31.674137    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:02:31.680025    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 10:02:31.688109    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 10:02:31.688968    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:02:31.688993    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:02:31.689006    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:02:31.689016    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:02:32.077133    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 10:02:32.077153    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 10:02:32.191789    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:02:32.191806    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:02:32.191814    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:02:32.191825    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:02:32.192676    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 10:02:32.192686    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 10:02:33.675165    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 1
	I0816 10:02:33.675183    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:33.675297    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:33.676089    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:33.676141    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:02:33.676153    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:02:33.676162    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:02:33.676169    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:02:33.676193    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:02:35.676266    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 2
	I0816 10:02:35.676285    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:35.676360    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:35.677182    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:35.677237    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:02:35.677248    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:02:35.677262    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:02:35.677270    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:02:35.677281    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:02:37.678515    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 3
	I0816 10:02:37.678532    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:37.678572    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:37.679405    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:37.679441    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:02:37.679451    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:02:37.679461    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:02:37.679468    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:02:37.679483    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:02:37.782496    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:37 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 10:02:37.782531    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:37 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 10:02:37.782540    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:37 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 10:02:37.806630    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:37 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 10:02:39.680161    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 4
	I0816 10:02:39.680178    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:39.680273    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:39.681064    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:39.681112    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:02:39.681136    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:02:39.681155    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:02:39.681170    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:02:39.681181    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:02:41.681380    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 5
	I0816 10:02:41.681414    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:41.681563    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:41.682343    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:41.682392    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:02:41.682406    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:02:41.682415    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found match: 72:69:8f:11:68:1d
	I0816 10:02:41.682421    3758 main.go:141] libmachine: (ha-286000-m02) DBG | IP: 192.169.0.6
	I0816 10:02:41.682475    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetConfigRaw
	I0816 10:02:41.683135    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:41.683257    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:41.683358    3758 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 10:02:41.683367    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetState
	I0816 10:02:41.683478    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:41.683537    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:41.684414    3758 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 10:02:41.684423    3758 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 10:02:41.684427    3758 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 10:02:41.684431    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:41.684566    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:41.684692    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:41.684809    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:41.684943    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:41.685095    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:41.685300    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:41.685308    3758 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 10:02:42.746559    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 10:02:42.746571    3758 main.go:141] libmachine: Detecting the provisioner...
	I0816 10:02:42.746577    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:42.746714    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:42.746824    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.746914    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.746988    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:42.747112    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:42.747269    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:42.747277    3758 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 10:02:42.811630    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 10:02:42.811666    3758 main.go:141] libmachine: found compatible host: buildroot
	I0816 10:02:42.811672    3758 main.go:141] libmachine: Provisioning with buildroot...
	I0816 10:02:42.811681    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:02:42.811816    3758 buildroot.go:166] provisioning hostname "ha-286000-m02"
	I0816 10:02:42.811828    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:02:42.811943    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:42.812045    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:42.812136    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.812274    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.812376    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:42.812513    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:42.812673    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:42.812682    3758 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-286000-m02 && echo "ha-286000-m02" | sudo tee /etc/hostname
	I0816 10:02:42.887531    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-286000-m02
	
	I0816 10:02:42.887545    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:42.887676    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:42.887769    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.887867    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.887953    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:42.888075    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:42.888214    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:42.888225    3758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-286000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-286000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-286000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 10:02:42.959625    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 10:02:42.959641    3758 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19461-1276/.minikube CaCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19461-1276/.minikube}
	I0816 10:02:42.959649    3758 buildroot.go:174] setting up certificates
	I0816 10:02:42.959661    3758 provision.go:84] configureAuth start
	I0816 10:02:42.959666    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:02:42.959803    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:02:42.959899    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:42.959980    3758 provision.go:143] copyHostCerts
	I0816 10:02:42.960007    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:02:42.960055    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem, removing ...
	I0816 10:02:42.960061    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:02:42.960954    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem (1078 bytes)
	I0816 10:02:42.961160    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:02:42.961190    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem, removing ...
	I0816 10:02:42.961195    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:02:42.961273    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem (1123 bytes)
	I0816 10:02:42.961439    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:02:42.961509    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem, removing ...
	I0816 10:02:42.961515    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:02:42.961594    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem (1679 bytes)
	I0816 10:02:42.961746    3758 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem org=jenkins.ha-286000-m02 san=[127.0.0.1 192.169.0.6 ha-286000-m02 localhost minikube]
	I0816 10:02:43.093511    3758 provision.go:177] copyRemoteCerts
	I0816 10:02:43.093556    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 10:02:43.093572    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:43.093719    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:43.093818    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.093917    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:43.094005    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:02:43.132229    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 10:02:43.132299    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 10:02:43.151814    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 10:02:43.151873    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 10:02:43.171464    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 10:02:43.171532    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 10:02:43.191045    3758 provision.go:87] duration metric: took 231.381757ms to configureAuth
	I0816 10:02:43.191057    3758 buildroot.go:189] setting minikube options for container-runtime
	I0816 10:02:43.191187    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:02:43.191200    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:43.191321    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:43.191410    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:43.191497    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.191580    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.191658    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:43.191759    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:43.191891    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:43.191898    3758 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 10:02:43.256733    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 10:02:43.256744    3758 buildroot.go:70] root file system type: tmpfs
	I0816 10:02:43.256823    3758 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 10:02:43.256835    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:43.256961    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:43.257048    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.257142    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.257220    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:43.257364    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:43.257505    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:43.257553    3758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 10:02:43.330709    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 10:02:43.330729    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:43.330874    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:43.330977    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.331073    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.331167    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:43.331293    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:43.331440    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:43.331451    3758 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 10:02:44.884634    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0816 10:02:44.884648    3758 main.go:141] libmachine: Checking connection to Docker...
	I0816 10:02:44.884655    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetURL
	I0816 10:02:44.884793    3758 main.go:141] libmachine: Docker is up and running!
	I0816 10:02:44.884799    3758 main.go:141] libmachine: Reticulating splines...
	I0816 10:02:44.884804    3758 client.go:171] duration metric: took 14.112778669s to LocalClient.Create
	I0816 10:02:44.884820    3758 start.go:167] duration metric: took 14.112810851s to libmachine.API.Create "ha-286000"
	I0816 10:02:44.884826    3758 start.go:293] postStartSetup for "ha-286000-m02" (driver="hyperkit")
	I0816 10:02:44.884832    3758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 10:02:44.884843    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:44.884991    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 10:02:44.885004    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:44.885101    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:44.885204    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:44.885335    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:44.885428    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:02:44.925701    3758 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 10:02:44.929599    3758 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 10:02:44.929613    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/addons for local assets ...
	I0816 10:02:44.929722    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/files for local assets ...
	I0816 10:02:44.929908    3758 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> 18312.pem in /etc/ssl/certs
	I0816 10:02:44.929914    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /etc/ssl/certs/18312.pem
	I0816 10:02:44.930119    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 10:02:44.940484    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:02:44.973200    3758 start.go:296] duration metric: took 88.36731ms for postStartSetup
	I0816 10:02:44.973230    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetConfigRaw
	I0816 10:02:44.973848    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:02:44.973996    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:02:44.974351    3758 start.go:128] duration metric: took 14.23599979s to createHost
	I0816 10:02:44.974366    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:44.974448    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:44.974568    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:44.974660    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:44.974744    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:44.974844    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:44.974966    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:44.974973    3758 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 10:02:45.036267    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723827764.932365297
	
	I0816 10:02:45.036285    3758 fix.go:216] guest clock: 1723827764.932365297
	I0816 10:02:45.036291    3758 fix.go:229] Guest: 2024-08-16 10:02:44.932365297 -0700 PDT Remote: 2024-08-16 10:02:44.97436 -0700 PDT m=+56.899083401 (delta=-41.994703ms)
	I0816 10:02:45.036301    3758 fix.go:200] guest clock delta is within tolerance: -41.994703ms
	I0816 10:02:45.036306    3758 start.go:83] releasing machines lock for "ha-286000-m02", held for 14.298063452s
	I0816 10:02:45.036338    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:45.036468    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:02:45.062334    3758 out.go:177] * Found network options:
	I0816 10:02:45.105996    3758 out.go:177]   - NO_PROXY=192.169.0.5
	W0816 10:02:45.128174    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:02:45.128216    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:45.128829    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:45.128969    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:45.129060    3758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 10:02:45.129088    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	W0816 10:02:45.129115    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:02:45.129222    3758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0816 10:02:45.129232    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:45.129238    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:45.129370    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:45.129393    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:45.129500    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:45.129515    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:45.129631    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:02:45.129647    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:45.129727    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	W0816 10:02:45.164265    3758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 10:02:45.164324    3758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 10:02:45.211468    3758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 10:02:45.211489    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:02:45.211595    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:02:45.228144    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 10:02:45.237310    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 10:02:45.246272    3758 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 10:02:45.246316    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 10:02:45.255987    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:02:45.265400    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 10:02:45.274661    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:02:45.283628    3758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 10:02:45.292648    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 10:02:45.302207    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 10:02:45.311314    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 10:02:45.320414    3758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 10:02:45.328532    3758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 10:02:45.337229    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:45.444954    3758 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0816 10:02:45.464569    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:02:45.464652    3758 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 10:02:45.488292    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:02:45.500162    3758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 10:02:45.561835    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:02:45.572027    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:02:45.582122    3758 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 10:02:45.603011    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:02:45.613488    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:02:45.628434    3758 ssh_runner.go:195] Run: which cri-dockerd
	I0816 10:02:45.631358    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 10:02:45.638496    3758 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 10:02:45.652676    3758 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 10:02:45.755971    3758 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 10:02:45.860533    3758 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 10:02:45.860559    3758 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 10:02:45.874570    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:45.976095    3758 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:02:48.459358    3758 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.483288325s)
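
The 130-byte /etc/docker/daemon.json pushed above is not printed in the log; a plausible sketch of what a cgroupfs daemon.json looks like (the exec-opts key below is an assumption about this run, not a quote from it):

    # Hypothetical daemon.json matching docker.go's "cgroupfs" choice above
    echo '{ "exec-opts": ["native.cgroupdriver=cgroupfs"] }' | sudo tee /etc/docker/daemon.json
    sudo systemctl daemon-reload && sudo systemctl restart docker
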
	I0816 10:02:48.459417    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0816 10:02:48.469882    3758 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0816 10:02:48.484429    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:02:48.495038    3758 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0816 10:02:48.597129    3758 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0816 10:02:48.691412    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:48.797592    3758 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0816 10:02:48.812075    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:02:48.823307    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:48.918652    3758 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0816 10:02:48.980191    3758 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0816 10:02:48.980257    3758 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0816 10:02:48.984954    3758 start.go:563] Will wait 60s for crictl version
	I0816 10:02:48.985011    3758 ssh_runner.go:195] Run: which crictl
	I0816 10:02:48.988185    3758 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 10:02:49.015262    3758 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
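
With /etc/crictl.yaml now pointing at cri-dockerd, the version probe above is easy to reproduce by hand on the guest:

    # Uses the endpoint written to /etc/crictl.yaml earlier in the log
    sudo /usr/bin/crictl version
    # Or with the endpoint spelled out explicitly
    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
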
	I0816 10:02:49.015344    3758 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:02:49.032341    3758 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:02:49.078128    3758 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0816 10:02:49.137645    3758 out.go:177]   - env NO_PROXY=192.169.0.5
	I0816 10:02:49.176661    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:02:49.177129    3758 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0816 10:02:49.181778    3758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
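
The /etc/hosts edit above follows a strip-then-append pattern so repeated starts never accumulate duplicate entries. The same idempotent pattern spelled out (IP and hostname taken from the log line above):

    # Drop any stale host.minikube.internal line, then append the current mapping
    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
      echo $'192.169.0.1\thost.minikube.internal'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
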
	I0816 10:02:49.191522    3758 mustload.go:65] Loading cluster: ha-286000
	I0816 10:02:49.191665    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:02:49.191885    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:49.191909    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:49.200721    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51064
	I0816 10:02:49.201069    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:49.201413    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:49.201424    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:49.201648    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:49.201776    3758 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:02:49.201862    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:49.201960    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:02:49.202920    3758 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:02:49.203188    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:49.203205    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:49.211926    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51066
	I0816 10:02:49.212258    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:49.212635    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:49.212652    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:49.212860    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:49.212967    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:49.213056    3758 certs.go:68] Setting up /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000 for IP: 192.169.0.6
	I0816 10:02:49.213061    3758 certs.go:194] generating shared ca certs ...
	I0816 10:02:49.213071    3758 certs.go:226] acquiring lock for ca certs: {Name:mka549fbc467849834d2267da204028c49d88a3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:49.213229    3758 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key
	I0816 10:02:49.213305    3758 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key
	I0816 10:02:49.213314    3758 certs.go:256] generating profile certs ...
	I0816 10:02:49.213406    3758 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key
	I0816 10:02:49.213429    3758 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.5eb44438
	I0816 10:02:49.213443    3758 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.5eb44438 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I0816 10:02:49.263058    3758 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.5eb44438 ...
	I0816 10:02:49.263077    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.5eb44438: {Name:mk266d5e842df442c104f85113e06d171d5a89ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:49.263395    3758 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.5eb44438 ...
	I0816 10:02:49.263404    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.5eb44438: {Name:mk1428912a4c1edaafe55f9bff302c19ad337785 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:49.263623    3758 certs.go:381] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.5eb44438 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt
	I0816 10:02:49.263846    3758 certs.go:385] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.5eb44438 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key
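
The apiserver certificate written above carries the service IP, loopback, both node IPs, and the HA VIP as subject alternative names. minikube generates it in Go (crypto.go), but a rough openssl equivalent conveys the shape; the file names here are illustrative, and the SAN list is copied from the log:

    # Key and CSR for the apiserver certificate
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout apiserver.key -out apiserver.csr -subj "/CN=minikube"
    # Sign with the cluster CA, attaching the SANs listed in the log above
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
      -days 365 -out apiserver.crt \
      -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.169.0.5,IP:192.169.0.6,IP:192.169.0.254')
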
	I0816 10:02:49.264093    3758 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key
	I0816 10:02:49.264103    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 10:02:49.264125    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 10:02:49.264143    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 10:02:49.264163    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 10:02:49.264180    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 10:02:49.264199    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 10:02:49.264223    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 10:02:49.264242    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 10:02:49.264331    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem (1338 bytes)
	W0816 10:02:49.264378    3758 certs.go:480] ignoring /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831_empty.pem, impossibly tiny 0 bytes
	I0816 10:02:49.264386    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 10:02:49.264418    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem (1078 bytes)
	I0816 10:02:49.264449    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem (1123 bytes)
	I0816 10:02:49.264477    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem (1679 bytes)
	I0816 10:02:49.264545    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:02:49.264593    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /usr/share/ca-certificates/18312.pem
	I0816 10:02:49.264614    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:49.264634    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem -> /usr/share/ca-certificates/1831.pem
	I0816 10:02:49.264663    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:49.264814    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:49.264897    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:49.265014    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:49.265108    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:49.296192    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0816 10:02:49.299577    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0816 10:02:49.312765    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0816 10:02:49.316551    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0816 10:02:49.332167    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0816 10:02:49.336199    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0816 10:02:49.349166    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0816 10:02:49.354242    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0816 10:02:49.364119    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0816 10:02:49.367349    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0816 10:02:49.375423    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0816 10:02:49.378717    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0816 10:02:49.386918    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 10:02:49.408036    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 10:02:49.428163    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 10:02:49.448952    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 10:02:49.468986    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0816 10:02:49.489674    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 10:02:49.509597    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 10:02:49.530175    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 10:02:49.550578    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /usr/share/ca-certificates/18312.pem (1708 bytes)
	I0816 10:02:49.571266    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 10:02:49.590995    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem --> /usr/share/ca-certificates/1831.pem (1338 bytes)
	I0816 10:02:49.611766    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0816 10:02:49.625222    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0816 10:02:49.638852    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0816 10:02:49.653497    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0816 10:02:49.667098    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0816 10:02:49.680935    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0816 10:02:49.695437    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0816 10:02:49.708684    3758 ssh_runner.go:195] Run: openssl version
	I0816 10:02:49.712979    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18312.pem && ln -fs /usr/share/ca-certificates/18312.pem /etc/ssl/certs/18312.pem"
	I0816 10:02:49.721565    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18312.pem
	I0816 10:02:49.725513    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 16:57 /usr/share/ca-certificates/18312.pem
	I0816 10:02:49.725580    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18312.pem
	I0816 10:02:49.730364    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18312.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 10:02:49.739184    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 10:02:49.747494    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:49.750923    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:48 /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:49.750955    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:49.755195    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 10:02:49.763703    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1831.pem && ln -fs /usr/share/ca-certificates/1831.pem /etc/ssl/certs/1831.pem"
	I0816 10:02:49.772914    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1831.pem
	I0816 10:02:49.776377    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 16:57 /usr/share/ca-certificates/1831.pem
	I0816 10:02:49.776421    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1831.pem
	I0816 10:02:49.780731    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1831.pem /etc/ssl/certs/51391683.0"
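
Each test -L / ln -fs pair above installs a PEM under the subject-hash name that OpenSSL's certificate lookup expects (e.g. b5213941.0 for minikubeCA.pem), which is why the log runs openssl x509 -hash first. The link name can be derived on the spot:

    # Subject-hash file name OpenSSL will look for, e.g. b5213941
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
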
	I0816 10:02:49.789078    3758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 10:02:49.792242    3758 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 10:02:49.792277    3758 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.0 docker true true} ...
	I0816 10:02:49.792332    3758 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-286000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 10:02:49.792349    3758 kube-vip.go:115] generating kube-vip config ...
	I0816 10:02:49.792381    3758 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 10:02:49.804469    3758 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 10:02:49.804511    3758 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
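
The manifest above runs kube-vip as a static pod with leader election (the plndr-cp-lock lease) plus lb_enable, so at any moment exactly one control-plane node advertises 192.169.0.254 over ARP and load-balances port 8443 across the apiservers. Once the pod is running, the VIP can be spot-checked from the guest; a small sketch:

    # On the current kube-vip leader, the VIP appears on the vip_interface (eth0)
    ip addr show eth0 | grep 192.169.0.254
    # And the apiserver answers on the VIP itself
    curl -k https://192.169.0.254:8443/version
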
	I0816 10:02:49.804567    3758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 10:02:49.812921    3758 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0816 10:02:49.812983    3758 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0816 10:02:49.821031    3758 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0816 10:02:49.821035    3758 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0816 10:02:49.821039    3758 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubectl
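
The checksum=file:... query string above instructs minikube's downloader to fetch the published .sha256 next to each binary and verify it before caching. The same verification by hand, for one of the three (URLs taken from the log):

    v=v1.31.0
    curl -LO "https://dl.k8s.io/release/${v}/bin/linux/amd64/kubelet"
    curl -LO "https://dl.k8s.io/release/${v}/bin/linux/amd64/kubelet.sha256"
    # The .sha256 file holds the bare digest; pair it with the file name and verify
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check
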
	I0816 10:02:51.911279    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0816 10:02:51.911374    3758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0816 10:02:51.915115    3758 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0816 10:02:51.915150    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0816 10:02:52.537289    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0816 10:02:52.537383    3758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0816 10:02:52.540898    3758 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0816 10:02:52.540929    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0816 10:02:53.267467    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:02:53.278589    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0816 10:02:53.278731    3758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0816 10:02:53.282055    3758 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0816 10:02:53.282073    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0816 10:02:53.498714    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0816 10:02:53.506112    3758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0816 10:02:53.519672    3758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 10:02:53.533551    3758 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0816 10:02:53.548024    3758 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0816 10:02:53.551124    3758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:02:53.560470    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:53.659270    3758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:02:53.674625    3758 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:02:53.674900    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:53.674923    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:53.683891    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51093
	I0816 10:02:53.684256    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:53.684641    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:53.684658    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:53.684885    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:53.685003    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:53.685084    3758 start.go:317] joinCluster: &{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:02:53.685155    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0816 10:02:53.685167    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:53.685248    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:53.685339    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:53.685423    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:53.685503    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:53.765911    3758 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:02:53.765972    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mpmdnf.6rzc0wpb01e0t6lz --discovery-token-ca-cert-hash sha256:995590413ea712c4f7db2cf849d28264ebc291e6cd53d741ce1d22c1daaa1602 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-286000-m02 --control-plane --apiserver-advertise-address=192.169.0.6 --apiserver-bind-port=8443"
	I0816 10:03:22.070391    3758 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mpmdnf.6rzc0wpb01e0t6lz --discovery-token-ca-cert-hash sha256:995590413ea712c4f7db2cf849d28264ebc291e6cd53d741ce1d22c1daaa1602 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-286000-m02 --control-plane --apiserver-advertise-address=192.169.0.6 --apiserver-bind-port=8443": (28.304928658s)
	I0816 10:03:22.070414    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0816 10:03:22.427732    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-286000-m02 minikube.k8s.io/updated_at=2024_08_16T10_03_22_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd minikube.k8s.io/name=ha-286000 minikube.k8s.io/primary=false
	I0816 10:03:22.531211    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-286000-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0816 10:03:22.629050    3758 start.go:319] duration metric: took 28.944509117s to joinCluster
	I0816 10:03:22.629105    3758 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:03:22.629360    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:03:22.652598    3758 out.go:177] * Verifying Kubernetes components...
	I0816 10:03:22.726472    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:03:22.952731    3758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:03:22.975401    3758 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:03:22.975621    3758 kapi.go:59] client config for ha-286000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key", CAFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xa202f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0816 10:03:22.975663    3758 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0816 10:03:22.975836    3758 node_ready.go:35] waiting up to 6m0s for node "ha-286000-m02" to be "Ready" ...
	I0816 10:03:22.975886    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:22.975891    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:22.975897    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:22.975901    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:22.983180    3758 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0816 10:03:23.476314    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:23.476365    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:23.476380    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:23.476394    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:23.502874    3758 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0816 10:03:23.977290    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:23.977304    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:23.977311    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:23.977315    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:23.979495    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:24.477835    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:24.477851    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:24.477858    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:24.477861    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:24.480701    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:24.977514    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:24.977568    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:24.977580    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:24.977588    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:24.980856    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:24.981299    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:25.476235    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:25.476252    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:25.476260    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:25.476277    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:25.478567    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:25.976418    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:25.976469    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:25.976477    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:25.976480    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:25.978730    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:26.476423    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:26.476439    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:26.476445    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:26.476448    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:26.478424    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:26.975919    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:26.975934    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:26.975940    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:26.975945    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:26.977809    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:27.475966    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:27.475981    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:27.475987    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:27.475990    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:27.477769    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:27.478121    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:27.977123    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:27.977139    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:27.977146    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:27.977150    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:27.979266    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:28.477140    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:28.477226    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:28.477254    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:28.477264    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:28.480386    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:28.977368    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:28.977407    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:28.977420    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:28.977427    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:28.980479    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:29.477521    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:29.477545    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:29.477556    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:29.477565    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:29.481179    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:29.481510    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:29.975906    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:29.975932    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:29.975943    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:29.975949    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:29.978633    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:30.477171    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:30.477189    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:30.477198    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:30.477201    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:30.480005    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:30.977947    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:30.977973    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:30.977993    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:30.977998    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:30.982219    3758 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 10:03:31.476252    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:31.476327    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:31.476341    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:31.476347    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:31.479417    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:31.975987    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:31.976012    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:31.976023    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:31.976029    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:31.979409    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:31.979880    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:32.477180    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:32.477207    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:32.477229    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:32.477240    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:32.480686    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:32.976225    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:32.976250    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:32.976261    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:32.976269    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:32.979533    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:33.475956    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:33.475980    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:33.475991    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:33.475997    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:33.479025    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:33.977111    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:33.977185    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:33.977198    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:33.977204    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:33.980426    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:33.980922    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:34.476455    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:34.476470    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:34.476477    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:34.476481    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:34.478517    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:34.976996    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:34.977037    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:34.977049    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:34.977084    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:34.980500    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:35.476045    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:35.476068    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:35.476079    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:35.476086    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:35.479058    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:35.975920    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:35.975944    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:35.975956    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:35.975962    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:35.979071    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:36.477845    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:36.477872    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:36.477885    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:36.477946    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:36.481401    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:36.481890    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:36.977033    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:36.977057    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:36.977069    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:36.977076    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:36.980476    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:37.477095    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:37.477120    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:37.477133    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:37.477141    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:37.480485    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:37.976303    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:37.976320    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:37.976343    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:37.976348    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:37.978551    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:38.476492    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:38.476517    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.476528    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.476536    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.480190    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:38.976091    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:38.976114    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.976124    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.976129    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.979741    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:38.980051    3758 node_ready.go:49] node "ha-286000-m02" has status "Ready":"True"
	I0816 10:03:38.980066    3758 node_ready.go:38] duration metric: took 16.004516347s for node "ha-286000-m02" to be "Ready" ...
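
The sixteen seconds of GETs above are node_ready.go polling the Node object roughly every 500 ms until its Ready condition flips to True. The equivalent wait, run manually from the host against the same kubeconfig (a spot check, not part of the test itself):

    kubectl --kubeconfig /Users/jenkins/minikube-integration/19461-1276/kubeconfig \
      wait --for=condition=Ready node/ha-286000-m02 --timeout=6m
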
	I0816 10:03:38.980077    3758 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 10:03:38.980127    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:03:38.980135    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.980142    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.980152    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.982937    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:38.987546    3758 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-2kqjf" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.987591    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-2kqjf
	I0816 10:03:38.987597    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.987603    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.987607    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.989339    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.989748    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:38.989758    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.989764    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.989768    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.991324    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.991660    3758 pod_ready.go:93] pod "coredns-6f6b679f8f-2kqjf" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:38.991671    3758 pod_ready.go:82] duration metric: took 4.111381ms for pod "coredns-6f6b679f8f-2kqjf" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.991678    3758 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-rfbz7" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.991708    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-rfbz7
	I0816 10:03:38.991713    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.991718    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.991721    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.993245    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.993624    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:38.993631    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.993640    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.993644    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.995095    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.995419    3758 pod_ready.go:93] pod "coredns-6f6b679f8f-rfbz7" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:38.995427    3758 pod_ready.go:82] duration metric: took 3.744646ms for pod "coredns-6f6b679f8f-rfbz7" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.995433    3758 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.995467    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-286000
	I0816 10:03:38.995472    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.995478    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.995482    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.997007    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.997390    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:38.997397    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.997403    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.997406    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.998839    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.999151    3758 pod_ready.go:93] pod "etcd-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:38.999160    3758 pod_ready.go:82] duration metric: took 3.721879ms for pod "etcd-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.999165    3758 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.999202    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-286000-m02
	I0816 10:03:38.999207    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.999213    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.999217    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.001189    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:39.001547    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:39.001554    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.001559    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.001563    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.003004    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:39.003290    3758 pod_ready.go:93] pod "etcd-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:39.003298    3758 pod_ready.go:82] duration metric: took 4.127582ms for pod "etcd-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.003307    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.177834    3758 request.go:632] Waited for 174.443582ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000
	I0816 10:03:39.177948    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000
	I0816 10:03:39.177963    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.177974    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.177981    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.181371    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:39.376438    3758 request.go:632] Waited for 194.538822ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:39.376562    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:39.376574    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.376586    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.376596    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.379989    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:39.380425    3758 pod_ready.go:93] pod "kube-apiserver-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:39.380437    3758 pod_ready.go:82] duration metric: took 377.131323ms for pod "kube-apiserver-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.380446    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.576633    3758 request.go:632] Waited for 196.095353ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000-m02
	I0816 10:03:39.576687    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000-m02
	I0816 10:03:39.576696    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.576769    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.576793    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.580164    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:39.776642    3758 request.go:632] Waited for 195.66937ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:39.776725    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:39.776735    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.776746    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.776752    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.780273    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:39.780714    3758 pod_ready.go:93] pod "kube-apiserver-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:39.780723    3758 pod_ready.go:82] duration metric: took 400.274102ms for pod "kube-apiserver-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.780730    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.977598    3758 request.go:632] Waited for 196.714211ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000
	I0816 10:03:39.977650    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000
	I0816 10:03:39.977659    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.977670    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.977679    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.981079    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.177460    3758 request.go:632] Waited for 195.94045ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:40.177528    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:40.177589    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:40.177603    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:40.177609    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:40.180971    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.181992    3758 pod_ready.go:93] pod "kube-controller-manager-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:40.182036    3758 pod_ready.go:82] duration metric: took 401.277819ms for pod "kube-controller-manager-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:40.182047    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:40.376258    3758 request.go:632] Waited for 194.111468ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000-m02
	I0816 10:03:40.376327    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000-m02
	I0816 10:03:40.376336    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:40.376347    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:40.376355    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:40.379476    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.576506    3758 request.go:632] Waited for 196.409077ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:40.576552    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:40.576560    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:40.576571    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:40.576580    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:40.579964    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.580641    3758 pod_ready.go:93] pod "kube-controller-manager-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:40.580654    3758 pod_ready.go:82] duration metric: took 398.607786ms for pod "kube-controller-manager-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:40.580663    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pt669" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:40.778194    3758 request.go:632] Waited for 197.469682ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pt669
	I0816 10:03:40.778324    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pt669
	I0816 10:03:40.778333    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:40.778345    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:40.778352    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:40.781925    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.976623    3758 request.go:632] Waited for 194.04689ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:40.976752    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:40.976761    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:40.976772    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:40.976780    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:40.980022    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.980473    3758 pod_ready.go:93] pod "kube-proxy-pt669" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:40.980485    3758 pod_ready.go:82] duration metric: took 399.822578ms for pod "kube-proxy-pt669" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:40.980494    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-w4nt2" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:41.177361    3758 request.go:632] Waited for 196.811255ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w4nt2
	I0816 10:03:41.177533    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w4nt2
	I0816 10:03:41.177545    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:41.177556    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:41.177563    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:41.181543    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:41.377692    3758 request.go:632] Waited for 195.583238ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:41.377791    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:41.377802    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:41.377812    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:41.377819    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:41.381134    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:41.381716    3758 pod_ready.go:93] pod "kube-proxy-w4nt2" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:41.381726    3758 pod_ready.go:82] duration metric: took 401.234529ms for pod "kube-proxy-w4nt2" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:41.381733    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:41.577020    3758 request.go:632] Waited for 195.246357ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000
	I0816 10:03:41.577073    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000
	I0816 10:03:41.577125    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:41.577138    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:41.577147    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:41.580051    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:41.777879    3758 request.go:632] Waited for 197.366145ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:41.777974    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:41.777983    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:41.777995    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:41.778003    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:41.781434    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:41.782012    3758 pod_ready.go:93] pod "kube-scheduler-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:41.782023    3758 pod_ready.go:82] duration metric: took 400.292624ms for pod "kube-scheduler-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:41.782051    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:41.976356    3758 request.go:632] Waited for 194.23341ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000-m02
	I0816 10:03:41.976463    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000-m02
	I0816 10:03:41.976473    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:41.976485    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:41.976494    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:41.980088    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:42.176952    3758 request.go:632] Waited for 196.161737ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:42.177009    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:42.177018    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.177026    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.177083    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.182888    3758 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 10:03:42.183179    3758 pod_ready.go:93] pod "kube-scheduler-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:42.183188    3758 pod_ready.go:82] duration metric: took 401.133714ms for pod "kube-scheduler-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:42.183203    3758 pod_ready.go:39] duration metric: took 3.203173842s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
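
	The pod_ready.go loop above polls each system pod until its PodReady condition reports True. A minimal client-go sketch of that pattern — not minikube's actual code; waitPodReady, the 2s interval, and the package name are illustrative assumptions:

package podwait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls every 2s, up to 6m, until the named pod's PodReady
// condition is True. Assumes a configured clientset (hypothetical helper,
// mirroring the pod_ready.go waits logged above).
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat API errors as transient and keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
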
	I0816 10:03:42.183221    3758 api_server.go:52] waiting for apiserver process to appear ...
	I0816 10:03:42.183274    3758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:03:42.195671    3758 api_server.go:72] duration metric: took 19.566910589s to wait for apiserver process to appear ...
	I0816 10:03:42.195682    3758 api_server.go:88] waiting for apiserver healthz status ...
	I0816 10:03:42.195698    3758 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0816 10:03:42.198837    3758 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0816 10:03:42.198870    3758 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0816 10:03:42.198874    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.198880    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.198884    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.199389    3758 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0816 10:03:42.199468    3758 api_server.go:141] control plane version: v1.31.0
	I0816 10:03:42.199478    3758 api_server.go:131] duration metric: took 3.791893ms to wait for apiserver health ...
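
	The healthz probe above is a plain HTTPS GET that expects the literal body "ok". A minimal sketch under that assumption; TLS verification is skipped here purely for illustration, whereas the real client is built from the cluster's certificates:

package apihealth

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz GETs <endpoint>/healthz and requires a 200 with body "ok",
// like the api_server.go check logged above (illustrative sketch only).
func checkHealthz(endpoint string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping verification is an assumption for this sketch only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}
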
	I0816 10:03:42.199486    3758 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 10:03:42.378048    3758 request.go:632] Waited for 178.500938ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:03:42.378156    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:03:42.378166    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.378178    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.378186    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.382776    3758 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 10:03:42.386068    3758 system_pods.go:59] 17 kube-system pods found
	I0816 10:03:42.386083    3758 system_pods.go:61] "coredns-6f6b679f8f-2kqjf" [be18cd09-3e3b-4749-bf29-7001a879f593] Running
	I0816 10:03:42.386087    3758 system_pods.go:61] "coredns-6f6b679f8f-rfbz7" [a593e27a-e38b-46c7-a603-44963c31c095] Running
	I0816 10:03:42.386090    3758 system_pods.go:61] "etcd-ha-286000" [c45bc623-befe-4e88-8da1-bb4f7d7be5b2] Running
	I0816 10:03:42.386093    3758 system_pods.go:61] "etcd-ha-286000-m02" [e14fbe0c-4527-4cbb-8c13-01c0b32a8d15] Running
	I0816 10:03:42.386096    3758 system_pods.go:61] "kindnet-t9kjf" [20c57f28-5e86-44a4-8a2a-07e1e240abf2] Running
	I0816 10:03:42.386099    3758 system_pods.go:61] "kindnet-whqxb" [1cc5291b-52ea-44b5-b87c-607d46b5281a] Running
	I0816 10:03:42.386102    3758 system_pods.go:61] "kube-apiserver-ha-286000" [c0d298a9-931b-4084-9e5f-01a88de27456] Running
	I0816 10:03:42.386104    3758 system_pods.go:61] "kube-apiserver-ha-286000-m02" [3666c131-2b88-483a-ab8c-b18cb8db7d6f] Running
	I0816 10:03:42.386120    3758 system_pods.go:61] "kube-controller-manager-ha-286000" [5a4f4c5d-d89a-4e5f-b022-479ca31f64cd] Running
	I0816 10:03:42.386127    3758 system_pods.go:61] "kube-controller-manager-ha-286000-m02" [a347c3d4-fa47-49ea-93a0-83087017a2bd] Running
	I0816 10:03:42.386130    3758 system_pods.go:61] "kube-proxy-pt669" [798c956f-8f08-465a-b0d9-90b65f703f80] Running
	I0816 10:03:42.386139    3758 system_pods.go:61] "kube-proxy-w4nt2" [79ce2248-f8fd-4a3b-aeb7-f81d7ad64564] Running
	I0816 10:03:42.386143    3758 system_pods.go:61] "kube-scheduler-ha-286000" [b4af80e0-cbb0-4257-a212-3585faff0875] Running
	I0816 10:03:42.386145    3758 system_pods.go:61] "kube-scheduler-ha-286000-m02" [89920e93-c959-47ec-8a2c-f2b63e8c3d3a] Running
	I0816 10:03:42.386153    3758 system_pods.go:61] "kube-vip-ha-286000" [b0f85be2-2afb-44b0-a701-161b24529349] Running
	I0816 10:03:42.386156    3758 system_pods.go:61] "kube-vip-ha-286000-m02" [4982dc53-946d-4f75-8e9e-452ef39ff2fa] Running
	I0816 10:03:42.386159    3758 system_pods.go:61] "storage-provisioner" [4805d53b-2db3-4092-a3f2-d4a854e93adc] Running
	I0816 10:03:42.386163    3758 system_pods.go:74] duration metric: took 186.675963ms to wait for pod list to return data ...
	I0816 10:03:42.386170    3758 default_sa.go:34] waiting for default service account to be created ...
	I0816 10:03:42.577030    3758 request.go:632] Waited for 190.795237ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0816 10:03:42.577183    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0816 10:03:42.577198    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.577209    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.577215    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.581020    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:42.581204    3758 default_sa.go:45] found service account: "default"
	I0816 10:03:42.581216    3758 default_sa.go:55] duration metric: took 195.045213ms for default service account to be created ...
	I0816 10:03:42.581224    3758 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 10:03:42.777160    3758 request.go:632] Waited for 195.848894ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:03:42.777252    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:03:42.777268    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.777285    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.777303    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.781637    3758 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 10:03:42.784962    3758 system_pods.go:86] 17 kube-system pods found
	I0816 10:03:42.784974    3758 system_pods.go:89] "coredns-6f6b679f8f-2kqjf" [be18cd09-3e3b-4749-bf29-7001a879f593] Running
	I0816 10:03:42.784978    3758 system_pods.go:89] "coredns-6f6b679f8f-rfbz7" [a593e27a-e38b-46c7-a603-44963c31c095] Running
	I0816 10:03:42.784981    3758 system_pods.go:89] "etcd-ha-286000" [c45bc623-befe-4e88-8da1-bb4f7d7be5b2] Running
	I0816 10:03:42.784984    3758 system_pods.go:89] "etcd-ha-286000-m02" [e14fbe0c-4527-4cbb-8c13-01c0b32a8d15] Running
	I0816 10:03:42.784987    3758 system_pods.go:89] "kindnet-t9kjf" [20c57f28-5e86-44a4-8a2a-07e1e240abf2] Running
	I0816 10:03:42.784990    3758 system_pods.go:89] "kindnet-whqxb" [1cc5291b-52ea-44b5-b87c-607d46b5281a] Running
	I0816 10:03:42.784992    3758 system_pods.go:89] "kube-apiserver-ha-286000" [c0d298a9-931b-4084-9e5f-01a88de27456] Running
	I0816 10:03:42.784995    3758 system_pods.go:89] "kube-apiserver-ha-286000-m02" [3666c131-2b88-483a-ab8c-b18cb8db7d6f] Running
	I0816 10:03:42.784997    3758 system_pods.go:89] "kube-controller-manager-ha-286000" [5a4f4c5d-d89a-4e5f-b022-479ca31f64cd] Running
	I0816 10:03:42.785001    3758 system_pods.go:89] "kube-controller-manager-ha-286000-m02" [a347c3d4-fa47-49ea-93a0-83087017a2bd] Running
	I0816 10:03:42.785003    3758 system_pods.go:89] "kube-proxy-pt669" [798c956f-8f08-465a-b0d9-90b65f703f80] Running
	I0816 10:03:42.785006    3758 system_pods.go:89] "kube-proxy-w4nt2" [79ce2248-f8fd-4a3b-aeb7-f81d7ad64564] Running
	I0816 10:03:42.785009    3758 system_pods.go:89] "kube-scheduler-ha-286000" [b4af80e0-cbb0-4257-a212-3585faff0875] Running
	I0816 10:03:42.785011    3758 system_pods.go:89] "kube-scheduler-ha-286000-m02" [89920e93-c959-47ec-8a2c-f2b63e8c3d3a] Running
	I0816 10:03:42.785015    3758 system_pods.go:89] "kube-vip-ha-286000" [b0f85be2-2afb-44b0-a701-161b24529349] Running
	I0816 10:03:42.785017    3758 system_pods.go:89] "kube-vip-ha-286000-m02" [4982dc53-946d-4f75-8e9e-452ef39ff2fa] Running
	I0816 10:03:42.785023    3758 system_pods.go:89] "storage-provisioner" [4805d53b-2db3-4092-a3f2-d4a854e93adc] Running
	I0816 10:03:42.785028    3758 system_pods.go:126] duration metric: took 203.80313ms to wait for k8s-apps to be running ...
	I0816 10:03:42.785034    3758 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 10:03:42.785088    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:03:42.796381    3758 system_svc.go:56] duration metric: took 11.343439ms WaitForService to wait for kubelet
	I0816 10:03:42.796397    3758 kubeadm.go:582] duration metric: took 20.16764951s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:03:42.796409    3758 node_conditions.go:102] verifying NodePressure condition ...
	I0816 10:03:42.977594    3758 request.go:632] Waited for 181.143005ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0816 10:03:42.977695    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0816 10:03:42.977706    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.977719    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.977724    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.981373    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:42.981988    3758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 10:03:42.982013    3758 node_conditions.go:123] node cpu capacity is 2
	I0816 10:03:42.982039    3758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 10:03:42.982043    3758 node_conditions.go:123] node cpu capacity is 2
	I0816 10:03:42.982046    3758 node_conditions.go:105] duration metric: took 185.637747ms to run NodePressure ...
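
	The NodePressure step reads each node's reported capacity, which is where the ephemeral-storage and CPU figures above come from. A small client-go sketch of the same read; printNodeCapacity is a hypothetical helper, assuming a configured clientset:

package nodecap

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity lists all nodes and prints the capacity fields the
// node_conditions.go check inspects (sketch, not minikube's code).
func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		// Assign to locals: Quantity's String method needs an addressable value.
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}
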
	I0816 10:03:42.982055    3758 start.go:241] waiting for startup goroutines ...
	I0816 10:03:42.982079    3758 start.go:255] writing updated cluster config ...
	I0816 10:03:43.003740    3758 out.go:201] 
	I0816 10:03:43.024885    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:03:43.024976    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:03:43.047776    3758 out.go:177] * Starting "ha-286000-m03" control-plane node in "ha-286000" cluster
	I0816 10:03:43.137621    3758 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:03:43.137655    3758 cache.go:56] Caching tarball of preloaded images
	I0816 10:03:43.137861    3758 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 10:03:43.137884    3758 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:03:43.138022    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:03:43.159475    3758 start.go:360] acquireMachinesLock for ha-286000-m03: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:03:43.159592    3758 start.go:364] duration metric: took 91.442µs to acquireMachinesLock for "ha-286000-m03"
	I0816 10:03:43.159625    3758 start.go:93] Provisioning new machine with config: &{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:03:43.159736    3758 start.go:125] createHost starting for "m03" (driver="hyperkit")
	I0816 10:03:43.180569    3758 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 10:03:43.180719    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:03:43.180762    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:03:43.191087    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51098
	I0816 10:03:43.191558    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:03:43.191889    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:03:43.191899    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:03:43.192106    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:03:43.192302    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:03:43.192394    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:43.192506    3758 start.go:159] libmachine.API.Create for "ha-286000" (driver="hyperkit")
	I0816 10:03:43.192526    3758 client.go:168] LocalClient.Create starting
	I0816 10:03:43.192559    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem
	I0816 10:03:43.192606    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:03:43.192620    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:03:43.192675    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem
	I0816 10:03:43.192704    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:03:43.192714    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:03:43.192726    3758 main.go:141] libmachine: Running pre-create checks...
	I0816 10:03:43.192731    3758 main.go:141] libmachine: (ha-286000-m03) Calling .PreCreateCheck
	I0816 10:03:43.192813    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:43.192833    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetConfigRaw
	I0816 10:03:43.202038    3758 main.go:141] libmachine: Creating machine...
	I0816 10:03:43.202062    3758 main.go:141] libmachine: (ha-286000-m03) Calling .Create
	I0816 10:03:43.202302    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:43.202574    3758 main.go:141] libmachine: (ha-286000-m03) DBG | I0816 10:03:43.202284    3848 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:03:43.202705    3758 main.go:141] libmachine: (ha-286000-m03) Downloading /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1276/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 10:03:43.529965    3758 main.go:141] libmachine: (ha-286000-m03) DBG | I0816 10:03:43.529902    3848 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa...
	I0816 10:03:43.631057    3758 main.go:141] libmachine: (ha-286000-m03) DBG | I0816 10:03:43.630965    3848 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/ha-286000-m03.rawdisk...
	I0816 10:03:43.631074    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Writing magic tar header
	I0816 10:03:43.631083    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Writing SSH key tar header
	I0816 10:03:43.631711    3758 main.go:141] libmachine: (ha-286000-m03) DBG | I0816 10:03:43.631681    3848 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03 ...
	I0816 10:03:44.155270    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:44.155286    3758 main.go:141] libmachine: (ha-286000-m03) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/hyperkit.pid
	I0816 10:03:44.155318    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Using UUID 408efb18-ff91-428f-b414-6eb0d982a779
	I0816 10:03:44.181981    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Generated MAC 8a:e:de:5b:b5:8b
	I0816 10:03:44.182010    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000
	I0816 10:03:44.182059    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"408efb18-ff91-428f-b414-6eb0d982a779", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:03:44.182101    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"408efb18-ff91-428f-b414-6eb0d982a779", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:03:44.182228    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "408efb18-ff91-428f-b414-6eb0d982a779", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/ha-286000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"}
	I0816 10:03:44.182306    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 408efb18-ff91-428f-b414-6eb0d982a779 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/ha-286000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"
	I0816 10:03:44.182332    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 10:03:44.185479    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: Pid is 3849
	I0816 10:03:44.185900    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 0
	I0816 10:03:44.185912    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:44.186025    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:44.186994    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:44.187062    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:03:44.187079    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:03:44.187114    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:03:44.187142    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:03:44.187168    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:03:44.187185    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:03:44.193352    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 10:03:44.201781    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 10:03:44.202595    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:03:44.202610    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:03:44.202637    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:03:44.202653    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:03:44.588441    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 10:03:44.588458    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 10:03:44.703100    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:03:44.703117    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:03:44.703125    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:03:44.703135    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:03:44.703952    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 10:03:44.703963    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 10:03:46.188345    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 1
	I0816 10:03:46.188360    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:46.188480    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:46.189255    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:46.189306    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:03:46.189321    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:03:46.189335    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:03:46.189352    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:03:46.189359    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:03:46.189385    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:03:48.190823    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 2
	I0816 10:03:48.190838    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:48.190916    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:48.191692    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:48.191747    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:03:48.191757    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:03:48.191779    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:03:48.191787    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:03:48.191803    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:03:48.191815    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:03:50.191897    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 3
	I0816 10:03:50.191913    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:50.191977    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:50.192744    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:50.192793    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:03:50.192802    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:03:50.192811    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:03:50.192820    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:03:50.192836    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:03:50.192850    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:03:50.310705    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:50 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 10:03:50.310770    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:50 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 10:03:50.310783    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:50 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 10:03:50.334054    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:50 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 10:03:52.193443    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 4
	I0816 10:03:52.193459    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:52.193535    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:52.194319    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:52.194367    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:03:52.194379    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:03:52.194390    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:03:52.194399    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:03:52.194408    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:03:52.194419    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:03:54.195434    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 5
	I0816 10:03:54.195456    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:54.195540    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:54.196406    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:54.196510    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 6 entries in /var/db/dhcpd_leases!
	I0816 10:03:54.196530    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66c0d7f9}
	I0816 10:03:54.196551    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found match: 8a:e:de:5b:b5:8b
	I0816 10:03:54.196553    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetConfigRaw
	I0816 10:03:54.196565    3758 main.go:141] libmachine: (ha-286000-m03) DBG | IP: 192.169.0.7
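
	Each attempt above rescans macOS's /var/db/dhcpd_leases for the VM's freshly generated MAC until a lease appears. A sketch of that matching, assuming the ip_address=/hw_address=1,<mac> block format visible in the entries logged above; findIPForMAC is illustrative, not the driver's code:

package leases

import (
	"bufio"
	"os"
	"strings"
)

// findIPForMAC scans the lease file for a block whose hw_address matches mac
// and returns that block's ip_address, or "" if no lease matches yet.
func findIPForMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// Recorded as "hw_address=1,<mac>" (assumed macOS lease format).
			parts := strings.SplitN(strings.TrimPrefix(line, "hw_address="), ",", 2)
			if len(parts) == 2 && parts[1] == mac {
				return ip, nil
			}
		case line == "}":
			ip = "" // block ended without a match; reset for the next entry
		}
	}
	return "", scanner.Err()
}
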
	I0816 10:03:54.197236    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:54.197351    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:54.197441    3758 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 10:03:54.197454    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetState
	I0816 10:03:54.197541    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:54.197611    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:54.198439    3758 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 10:03:54.198446    3758 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 10:03:54.198450    3758 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 10:03:54.198454    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:54.198532    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:54.198612    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:54.198695    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:54.198783    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:54.198892    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:54.199063    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:54.199071    3758 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 10:03:55.266165    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
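[Editor's note] The "exit 0" probe above is libmachine's readiness check: SSH counts as available once any command runs successfully over a fresh connection. A sketch of that retry loop using golang.org/x/crypto/ssh (the attempt count and interval here are assumptions, not the values libmachine uses):

    package main

    import (
        "time"

        "golang.org/x/crypto/ssh"
    )

    // waitForSSH dials the guest repeatedly and runs a no-op command;
    // the first success means the SSH daemon is up.
    func waitForSSH(addr string, cfg *ssh.ClientConfig, attempts int) error {
        var err error
        for i := 0; i < attempts; i++ {
            var client *ssh.Client
            if client, err = ssh.Dial("tcp", addr, cfg); err == nil {
                var sess *ssh.Session
                if sess, err = client.NewSession(); err == nil {
                    err = sess.Run("exit 0") // mirrors the probe in the log
                    sess.Close()
                }
                client.Close()
                if err == nil {
                    return nil
                }
            }
            time.Sleep(2 * time.Second)
        }
        return err
    }
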
	I0816 10:03:55.266179    3758 main.go:141] libmachine: Detecting the provisioner...
	I0816 10:03:55.266185    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.266318    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.266413    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.266507    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.266601    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.266731    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.266879    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.266886    3758 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 10:03:55.332778    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 10:03:55.332822    3758 main.go:141] libmachine: found compatible host: buildroot
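[Editor's note] Provisioner detection boils down to reading the ID= field out of the guest's /etc/os-release, as captured just above. A deliberately simplified sketch of that match (libmachine actually walks a registry of provisioners and asks each to match the release info):

    package main

    import "strings"

    // detectProvisioner extracts the ID field from /etc/os-release output,
    // e.g. the "ID=buildroot" line captured above.
    func detectProvisioner(osRelease string) string {
        for _, line := range strings.Split(osRelease, "\n") {
            if strings.HasPrefix(line, "ID=") {
                return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
            }
        }
        return ""
    }
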
	I0816 10:03:55.332828    3758 main.go:141] libmachine: Provisioning with buildroot...
	I0816 10:03:55.332834    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:03:55.332971    3758 buildroot.go:166] provisioning hostname "ha-286000-m03"
	I0816 10:03:55.332982    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:03:55.333079    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.333186    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.333277    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.333375    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.333463    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.333593    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.333737    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.333746    3758 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-286000-m03 && echo "ha-286000-m03" | sudo tee /etc/hostname
	I0816 10:03:55.411701    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-286000-m03
	
	I0816 10:03:55.411715    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.411851    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.411965    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.412057    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.412140    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.412274    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.412418    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.412429    3758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-286000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-286000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-286000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 10:03:55.485102    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
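[Editor's note] The shell fragment above is the standard hostname fix-up: grep -xq checks for a whole-line match, so if no line already names the new host, the script either rewrites an existing 127.0.1.1 entry in place with sed or appends a fresh one, ensuring ha-286000-m03 resolves locally before the node joins the cluster.
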
	I0816 10:03:55.485118    3758 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19461-1276/.minikube CaCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19461-1276/.minikube}
	I0816 10:03:55.485127    3758 buildroot.go:174] setting up certificates
	I0816 10:03:55.485134    3758 provision.go:84] configureAuth start
	I0816 10:03:55.485141    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:03:55.485284    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:03:55.485375    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.485445    3758 provision.go:143] copyHostCerts
	I0816 10:03:55.485474    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:03:55.485527    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem, removing ...
	I0816 10:03:55.485533    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:03:55.485654    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem (1123 bytes)
	I0816 10:03:55.485864    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:03:55.485895    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem, removing ...
	I0816 10:03:55.485900    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:03:55.486003    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem (1679 bytes)
	I0816 10:03:55.486172    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:03:55.486206    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem, removing ...
	I0816 10:03:55.486215    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:03:55.486286    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem (1078 bytes)
	I0816 10:03:55.486435    3758 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem org=jenkins.ha-286000-m03 san=[127.0.0.1 192.169.0.7 ha-286000-m03 localhost minikube]
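[Editor's note] The server cert generated above must cover every name and address in the san=[...] list, or Docker's TLS clients will reject the 2376 endpoint. A sketch of that signing step with crypto/x509 (key size, validity period, and the function name are assumptions):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // signServerCert issues a server certificate signed by the minikube CA,
    // embedding the SANs listed in the log line above.
    func signServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, org string) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{org}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(10, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs from the log: IPs and DNS names the cert must cover.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.7")},
            DNSNames:    []string{"ha-286000-m03", "localhost", "minikube"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        return der, key, err
    }
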
	I0816 10:03:55.572803    3758 provision.go:177] copyRemoteCerts
	I0816 10:03:55.572855    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 10:03:55.572869    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.573015    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.573114    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.573208    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.573304    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:03:55.612104    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 10:03:55.612186    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 10:03:55.632484    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 10:03:55.632548    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 10:03:55.652677    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 10:03:55.652752    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 10:03:55.672555    3758 provision.go:87] duration metric: took 187.4165ms to configureAuth
	I0816 10:03:55.672568    3758 buildroot.go:189] setting minikube options for container-runtime
	I0816 10:03:55.672735    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:03:55.672748    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:55.672889    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.672982    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.673071    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.673157    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.673245    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.673371    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.673496    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.673504    3758 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 10:03:55.738053    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 10:03:55.738064    3758 buildroot.go:70] root file system type: tmpfs
	I0816 10:03:55.738156    3758 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 10:03:55.738167    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.738306    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.738397    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.738489    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.738573    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.738694    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.738841    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.738890    3758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 10:03:55.813774    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 10:03:55.813790    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.813924    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.814019    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.814103    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.814193    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.814320    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.814470    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.814484    3758 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 10:03:57.356529    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
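[Editor's note] The diff -u ... || { mv; daemon-reload; enable; restart; } one-liner above is an idempotency guard: the rendered unit is only swapped in, and docker only restarted, when it differs from what is installed (a missing file, as here on first boot, also makes diff fail and takes the install path). The same logic in Go, for illustration (paths and the helper name are assumptions):

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // updateDockerUnit installs the rendered unit only when it changed,
    // then reloads systemd and restarts docker, mirroring the shell above.
    func updateDockerUnit(rendered []byte) error {
        const unit = "/lib/systemd/system/docker.service"
        current, err := os.ReadFile(unit)
        if err == nil && bytes.Equal(current, rendered) {
            return nil // unchanged: no restart needed
        }
        if err := os.WriteFile(unit, rendered, 0644); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"systemctl", "daemon-reload"},
            {"systemctl", "enable", "docker"},
            {"systemctl", "restart", "docker"},
        } {
            if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
                return fmt.Errorf("%v failed: %s: %w", args, out, err)
            }
        }
        return nil
    }
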
	
	I0816 10:03:57.356544    3758 main.go:141] libmachine: Checking connection to Docker...
	I0816 10:03:57.356549    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetURL
	I0816 10:03:57.356692    3758 main.go:141] libmachine: Docker is up and running!
	I0816 10:03:57.356700    3758 main.go:141] libmachine: Reticulating splines...
	I0816 10:03:57.356705    3758 client.go:171] duration metric: took 14.164443579s to LocalClient.Create
	I0816 10:03:57.356717    3758 start.go:167] duration metric: took 14.164482312s to libmachine.API.Create "ha-286000"
	I0816 10:03:57.356727    3758 start.go:293] postStartSetup for "ha-286000-m03" (driver="hyperkit")
	I0816 10:03:57.356734    3758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 10:03:57.356744    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:57.356919    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 10:03:57.356935    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:57.357030    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:57.357130    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:57.357273    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:57.357390    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:03:57.398231    3758 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 10:03:57.402472    3758 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 10:03:57.402483    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/addons for local assets ...
	I0816 10:03:57.402571    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/files for local assets ...
	I0816 10:03:57.402723    3758 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> 18312.pem in /etc/ssl/certs
	I0816 10:03:57.402729    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /etc/ssl/certs/18312.pem
	I0816 10:03:57.402895    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 10:03:57.412226    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:03:57.443242    3758 start.go:296] duration metric: took 86.507779ms for postStartSetup
	I0816 10:03:57.443268    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetConfigRaw
	I0816 10:03:57.443871    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:03:57.444031    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:03:57.444386    3758 start.go:128] duration metric: took 14.284913269s to createHost
	I0816 10:03:57.444401    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:57.444490    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:57.444568    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:57.444658    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:57.444732    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:57.444842    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:57.444963    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:57.444971    3758 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 10:03:57.509634    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723827837.586061636
	
	I0816 10:03:57.509648    3758 fix.go:216] guest clock: 1723827837.586061636
	I0816 10:03:57.509653    3758 fix.go:229] Guest: 2024-08-16 10:03:57.586061636 -0700 PDT Remote: 2024-08-16 10:03:57.444395 -0700 PDT m=+129.370490857 (delta=141.666636ms)
	I0816 10:03:57.509665    3758 fix.go:200] guest clock delta is within tolerance: 141.666636ms
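[Editor's note] The clock check above parses the guest's date +%s.%N output and compares it against the host's wall clock; only a skew beyond the tolerance would force a clock sync. A parsing sketch (the function name is an assumption):

    package main

    import (
        "strconv"
        "strings"
        "time"
    )

    // clockDelta turns "1723827837.586061636" into a time and reports how
    // far the guest clock is from the host's.
    func clockDelta(guestStamp string, host time.Time) (time.Duration, error) {
        sec, frac, _ := strings.Cut(strings.TrimSpace(guestStamp), ".")
        s, err := strconv.ParseInt(sec, 10, 64)
        if err != nil {
            return 0, err
        }
        var ns int64
        if frac != "" {
            for len(frac) < 9 { // pad the fraction out to nanoseconds
                frac += "0"
            }
            if ns, err = strconv.ParseInt(frac[:9], 10, 64); err != nil {
                return 0, err
            }
        }
        return time.Unix(s, ns).Sub(host), nil
    }
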
	I0816 10:03:57.509669    3758 start.go:83] releasing machines lock for "ha-286000-m03", held for 14.350339851s
	I0816 10:03:57.509691    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:57.509832    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:03:57.542277    3758 out.go:177] * Found network options:
	I0816 10:03:57.562266    3758 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0816 10:03:57.583040    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	W0816 10:03:57.583073    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:03:57.583092    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:57.583964    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:57.584310    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:57.584469    3758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 10:03:57.584522    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	W0816 10:03:57.584570    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	W0816 10:03:57.584596    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:03:57.584708    3758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0816 10:03:57.584735    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:57.584813    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:57.584995    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:57.585023    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:57.585164    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:57.585195    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:57.585338    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:03:57.585406    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:57.585566    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	W0816 10:03:57.623481    3758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 10:03:57.623541    3758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 10:03:57.670278    3758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 10:03:57.670298    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:03:57.670408    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:03:57.686134    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 10:03:57.694681    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 10:03:57.703134    3758 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 10:03:57.703198    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 10:03:57.711736    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:03:57.720041    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 10:03:57.728421    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:03:57.737252    3758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 10:03:57.746356    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 10:03:57.755117    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 10:03:57.763962    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 10:03:57.772639    3758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 10:03:57.780684    3758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 10:03:57.788938    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:03:57.893932    3758 ssh_runner.go:195] Run: sudo systemctl restart containerd
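[Editor's note] The sed pipeline above rewrites /etc/containerd/config.toml in place: SystemdCgroup = false selects the cgroupfs driver, the legacy io.containerd.runtime.v1.linux and runc.v1 runtimes are migrated to io.containerd.runc.v2, the sandbox image is pinned to registry.k8s.io/pause:3.10, and ip_forward is enabled; containerd is then restarted so the runtime-detection probes that follow see the new configuration.
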
	I0816 10:03:57.913150    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:03:57.913224    3758 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 10:03:57.932274    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:03:57.945000    3758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 10:03:57.965222    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:03:57.977460    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:03:57.988259    3758 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 10:03:58.006552    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:03:58.017261    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:03:58.032545    3758 ssh_runner.go:195] Run: which cri-dockerd
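[Editor's note] Note the crictl.yaml rewrite: while probing runtimes a few lines earlier, runtime-endpoint pointed at unix:///run/containerd/containerd.sock; once docker is settled on as the runtime, it is switched to unix:///var/run/cri-dockerd.sock so crictl and the kubelet reach Docker through cri-dockerd.
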
	I0816 10:03:58.035464    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 10:03:58.042742    3758 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 10:03:58.056747    3758 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 10:03:58.163471    3758 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 10:03:58.281020    3758 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 10:03:58.281044    3758 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
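[Editor's note] The 130-byte daemon.json pushed here is the piece that actually pins Docker's cgroup driver to cgroupfs, matching the SystemdCgroup = false setting written into containerd's config above, so kubelet and runtime agree on cgroup management. Its exact payload is not echoed in the log, and the restart that follows is where this run goes wrong.
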
	I0816 10:03:58.294992    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:03:58.399122    3758 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:04:59.415916    3758 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.017935847s)
	I0816 10:04:59.415981    3758 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0816 10:04:59.453653    3758 out.go:201] 
	W0816 10:04:59.474040    3758 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 16 17:03:56 ha-286000-m03 systemd[1]: Starting Docker Application Container Engine...
	Aug 16 17:03:56 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:56.200976866Z" level=info msg="Starting up"
	Aug 16 17:03:56 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:56.201455258Z" level=info msg="containerd not running, starting managed containerd"
	Aug 16 17:03:56 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:56.201908035Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=519
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.218394236Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233514110Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233584256Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233654513Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233690141Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233768300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233806284Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233955562Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233996222Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.234026670Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.234054929Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.234137090Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.234382642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.235934793Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.235985918Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.236117022Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.236159609Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.236256962Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.236327154Z" level=info msg="metadata content store policy set" policy=shared
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238422470Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238509436Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238555940Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238594343Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238633954Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238734892Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238944917Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239083528Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239123172Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239153894Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239184820Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239214617Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239248728Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239285992Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239333696Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239368624Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239411950Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239451554Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239490333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239523121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239554681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239589296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239621390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239653930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239683266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239712579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239742056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239772828Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239801901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239830636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239859570Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239893189Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239929555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239960582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239991787Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240076099Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240121809Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240153088Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240182438Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240210918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240240067Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240268586Z" level=info msg="NRI interface is disabled by configuration."
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240520719Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240609661Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240710485Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240795617Z" level=info msg="containerd successfully booted in 0.023088s"
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.221812848Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.228854621Z" level=info msg="Loading containers: start."
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.312023793Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.390459780Z" level=info msg="Loading containers: done."
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.404746652Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.404864935Z" level=info msg="Daemon has completed initialization"
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.430646618Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.430810646Z" level=info msg="API listen on [::]:2376"
	Aug 16 17:03:57 ha-286000-m03 systemd[1]: Started Docker Application Container Engine.
	Aug 16 17:03:58 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:58.488220464Z" level=info msg="Processing signal 'terminated'"
	Aug 16 17:03:58 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:58.489144852Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 16 17:03:58 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:58.489473398Z" level=info msg="Daemon shutdown complete"
	Aug 16 17:03:58 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:58.489515121Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 16 17:03:58 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:58.489527809Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 16 17:03:58 ha-286000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Aug 16 17:03:59 ha-286000-m03 systemd[1]: docker.service: Deactivated successfully.
	Aug 16 17:03:59 ha-286000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Aug 16 17:03:59 ha-286000-m03 systemd[1]: Starting Docker Application Container Engine...
	Aug 16 17:03:59 ha-286000-m03 dockerd[913]: time="2024-08-16T17:03:59.520198872Z" level=info msg="Starting up"
	Aug 16 17:04:59 ha-286000-m03 dockerd[913]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 16 17:04:59 ha-286000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 16 17:04:59 ha-286000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 16 17:04:59 ha-286000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0816 10:04:59.474111    3758 out.go:270] * 
	W0816 10:04:59.474938    3758 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:04:59.558192    3758 out.go:201] 
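[Editor's note] The journalctl excerpt pins down the failure mode: the first dockerd (pid 513) starts its own managed containerd and comes up cleanly, but after the configuration pass restarts the service, the second dockerd (pid 913) spends its full 60-second dial deadline waiting on /run/containerd/containerd.sock and exits 1, so docker.service fails and node bootstrap aborts with RUNTIME_ENABLE. This is consistent with the standalone containerd having been stopped earlier in the flow (see the systemctl stop -f containerd step above) while the restarted daemon ends up expecting a live socket at that path.
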
	
	
	==> Docker <==
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.631803753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.636334140Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.636542015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.636768594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.637206626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:02:50 ha-286000 cri-dockerd[1134]: time="2024-08-16T17:02:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fbd84fb813c9034ce56be933a9dc0c8539c5c831abbd163996da762065f0c208/resolv.conf as [nameserver 192.169.0.1]"
	Aug 16 17:02:50 ha-286000 cri-dockerd[1134]: time="2024-08-16T17:02:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/452942e267927b3a15327ece33bffe6fb305db22e6a72ff9b65d4acfe89f3891/resolv.conf as [nameserver 192.169.0.1]"
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.842515621Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.842901741Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.843146100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.843415719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.885181891Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.885227629Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.885240492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:02:50 ha-286000 dockerd[1241]: time="2024-08-16T17:02:50.885438543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:05:03 ha-286000 dockerd[1241]: time="2024-08-16T17:05:03.188642191Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:05:03 ha-286000 dockerd[1241]: time="2024-08-16T17:05:03.188710762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:05:03 ha-286000 dockerd[1241]: time="2024-08-16T17:05:03.188723920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:05:03 ha-286000 dockerd[1241]: time="2024-08-16T17:05:03.188799320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:05:03 ha-286000 cri-dockerd[1134]: time="2024-08-16T17:05:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1873ade92edb9d51940849fdee8cb6db41b03368956580ec6099a918aff580e1/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 16 17:05:04 ha-286000 cri-dockerd[1134]: time="2024-08-16T17:05:04Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Aug 16 17:05:04 ha-286000 dockerd[1241]: time="2024-08-16T17:05:04.522783748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:05:04 ha-286000 dockerd[1241]: time="2024-08-16T17:05:04.522878436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:05:04 ha-286000 dockerd[1241]: time="2024-08-16T17:05:04.522904596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:05:04 ha-286000 dockerd[1241]: time="2024-08-16T17:05:04.523003751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5bbe7a8cc492e       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   12 minutes ago      Running             busybox                   0                   1873ade92edb9       busybox-7dff88458-dvmvk
	bcd7170b050a5       cbb01a7bd410d                                                                                         15 minutes ago      Running             coredns                   0                   452942e267927       coredns-6f6b679f8f-rfbz7
	60d3d03e297ce       cbb01a7bd410d                                                                                         15 minutes ago      Running             coredns                   0                   fbd84fb813c90       coredns-6f6b679f8f-2kqjf
	f55b59f53c6eb       6e38f40d628db                                                                                         15 minutes ago      Running             storage-provisioner       0                   482990a4b00e6       storage-provisioner
	2677394dcd66f       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              15 minutes ago      Running             kindnet-cni               0                   afaf657e85c32       kindnet-whqxb
	81f6c96d46494       ad83b2ca7b09e                                                                                         15 minutes ago      Running             kube-proxy                0                   dfb822fc27b5e       kube-proxy-w4nt2
	cafa34c562392       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     15 minutes ago      Running             kube-vip                  0                   69fba128b04a6       kube-vip-ha-286000
	8f5867ee99d9b       045733566833c                                                                                         15 minutes ago      Running             kube-controller-manager   0                   e87fd3e77384f       kube-controller-manager-ha-286000
	f7b2e9efdd94f       1766f54c897f0                                                                                         15 minutes ago      Running             kube-scheduler            0                   8e2b186980aff       kube-scheduler-ha-286000
	d8dadff6cec78       2e96e5913fc06                                                                                         15 minutes ago      Running             etcd                      0                   cdb14ff7d8896       etcd-ha-286000
	5f3adc202a7a2       604f5db92eaa8                                                                                         15 minutes ago      Running             kube-apiserver            0                   818ee6dafe6c9       kube-apiserver-ha-286000
	
	
	==> coredns [60d3d03e297c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:37081 - 62351 "HINFO IN 7437422972060865489.3041931607585121070. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009759141s
	[INFO] 10.244.0.4:34542 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.00094119s
	[INFO] 10.244.0.4:38912 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.000921826s
	[INFO] 10.244.1.2:39585 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000070042s
	[INFO] 10.244.0.4:39673 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000101817s
	[INFO] 10.244.0.4:55820 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000225412s
	[INFO] 10.244.1.2:48427 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135793s
	[INFO] 10.244.1.2:33204 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000791531s
	[INFO] 10.244.1.2:51238 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081236s
	[INFO] 10.244.1.2:42705 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000135743s
	[INFO] 10.244.1.2:33254 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106295s
	[INFO] 10.244.0.4:53900 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067988s
	[INFO] 10.244.1.2:48994 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00006771s
	[INFO] 10.244.1.2:56734 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000138883s
	[INFO] 10.244.0.4:53039 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000056881s
	[INFO] 10.244.0.4:47474 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000052458s
	[INFO] 10.244.1.2:35027 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113257s
	[INFO] 10.244.1.2:60680 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000087182s
	[INFO] 10.244.1.2:36287 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000080566s
	
	
	==> coredns [bcd7170b050a] <==
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:47813 - 35687 "HINFO IN 8431179596625010309.1478595720938230641. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009077429s
	[INFO] 10.244.0.4:47786 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132847s
	[INFO] 10.244.0.4:40096 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.001354873s
	[INFO] 10.244.1.2:40884 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096214s
	[INFO] 10.244.1.2:55655 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000050276s
	[INFO] 10.244.1.2:47690 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000614458s
	[INFO] 10.244.0.4:45344 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000076791s
	[INFO] 10.244.0.4:53101 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.001042655s
	[INFO] 10.244.0.4:43889 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000082483s
	[INFO] 10.244.0.4:59210 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.00074308s
	[INFO] 10.244.0.4:38429 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00010233s
	[INFO] 10.244.0.4:48679 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00007214s
	[INFO] 10.244.1.2:43879 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000062278s
	[INFO] 10.244.1.2:45902 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000100209s
	[INFO] 10.244.1.2:39740 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071593s
	[INFO] 10.244.0.4:50878 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000077864s
	[INFO] 10.244.0.4:51260 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078903s
	[INFO] 10.244.0.4:38206 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000049117s
	[INFO] 10.244.1.2:54952 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000071797s
	[INFO] 10.244.1.2:36478 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063757s
	[INFO] 10.244.0.4:43240 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000066616s
	[INFO] 10.244.0.4:60894 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000052873s
	[INFO] 10.244.1.2:43932 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00008816s
	
	
	==> describe nodes <==
	Name:               ha-286000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-286000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=ha-286000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T10_02_26_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:02:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-286000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:17:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 17:15:42 +0000   Fri, 16 Aug 2024 17:02:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 17:15:42 +0000   Fri, 16 Aug 2024 17:02:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 17:15:42 +0000   Fri, 16 Aug 2024 17:02:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 17:15:42 +0000   Fri, 16 Aug 2024 17:02:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-286000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 a56c711da9894c3c86f2a7af9fec2b53
	  System UUID:                ad96408c-0000-0000-89eb-d74e5b68d297
	  Boot ID:                    23e4d4eb-a603-45bf-aca4-fb0893407f5a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-dvmvk              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-6f6b679f8f-2kqjf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-6f6b679f8f-rfbz7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-ha-286000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-whqxb                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-286000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-286000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-w4nt2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-286000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-286000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node ha-286000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node ha-286000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node ha-286000 status is now: NodeHasSufficientPID
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node ha-286000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node ha-286000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node ha-286000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                node-controller  Node ha-286000 event: Registered Node ha-286000 in Controller
	  Normal  NodeReady                15m                kubelet          Node ha-286000 status is now: NodeReady
	  Normal  RegisteredNode           14m                node-controller  Node ha-286000 event: Registered Node ha-286000 in Controller
	
	
	Name:               ha-286000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-286000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=ha-286000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T10_03_22_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:03:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-286000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:17:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 17:15:35 +0000   Fri, 16 Aug 2024 17:03:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 17:15:35 +0000   Fri, 16 Aug 2024 17:03:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 17:15:35 +0000   Fri, 16 Aug 2024 17:03:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 17:15:35 +0000   Fri, 16 Aug 2024 17:03:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-286000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 42bf4a1b451f44ad925f50a6a94e4cff
	  System UUID:                f7634935-0000-0000-a751-ac72999da031
	  Boot ID:                    e82d142e-37b9-4938-81bc-f5bc1a2db23f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-k9m92                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-286000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-t9kjf                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-286000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-286000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-pt669                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-286000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-286000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node ha-286000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node ha-286000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node ha-286000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14m                node-controller  Node ha-286000-m02 event: Registered Node ha-286000-m02 in Controller
	  Normal  RegisteredNode           14m                node-controller  Node ha-286000-m02 event: Registered Node ha-286000-m02 in Controller
	
	
	Name:               ha-286000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-286000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=ha-286000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T10_17_22_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:17:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-286000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:17:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 17:17:52 +0000   Fri, 16 Aug 2024 17:17:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 17:17:52 +0000   Fri, 16 Aug 2024 17:17:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 17:17:52 +0000   Fri, 16 Aug 2024 17:17:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 17:17:52 +0000   Fri, 16 Aug 2024 17:17:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-286000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 324c7ca05f77443abc4e861a3d5a5224
	  System UUID:                9a6645c6-0000-0000-8cbd-49b6a6a0383b
	  Boot ID:                    839ab079-775d-4939-ac8e-9fb255ba29df
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-99xmp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kindnet-b9r6s              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      33s
	  kube-system                 kube-proxy-5qhgk           0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 26s                kube-proxy       
	  Normal  NodeHasSufficientMemory  34s (x2 over 34s)  kubelet          Node ha-286000-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s (x2 over 34s)  kubelet          Node ha-286000-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s (x2 over 34s)  kubelet          Node ha-286000-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  34s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           31s                node-controller  Node ha-286000-m04 event: Registered Node ha-286000-m04 in Controller
	  Normal  RegisteredNode           30s                node-controller  Node ha-286000-m04 event: Registered Node ha-286000-m04 in Controller
	  Normal  NodeReady                11s                kubelet          Node ha-286000-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.006640] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.760553] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +1.351722] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000003] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug16 17:02] systemd-fstab-generator[519]: Ignoring "noauto" option for root device
	[  +0.114039] systemd-fstab-generator[531]: Ignoring "noauto" option for root device
	[  +1.207693] kauditd_printk_skb: 42 callbacks suppressed
	[  +0.611653] systemd-fstab-generator[786]: Ignoring "noauto" option for root device
	[  +0.265175] systemd-fstab-generator[846]: Ignoring "noauto" option for root device
	[  +0.101006] systemd-fstab-generator[858]: Ignoring "noauto" option for root device
	[  +0.123308] systemd-fstab-generator[872]: Ignoring "noauto" option for root device
	[  +2.497195] systemd-fstab-generator[1086]: Ignoring "noauto" option for root device
	[  +0.110299] systemd-fstab-generator[1098]: Ignoring "noauto" option for root device
	[  +0.095670] systemd-fstab-generator[1110]: Ignoring "noauto" option for root device
	[  +0.122299] systemd-fstab-generator[1126]: Ignoring "noauto" option for root device
	[  +3.636333] systemd-fstab-generator[1227]: Ignoring "noauto" option for root device
	[  +0.056190] kauditd_printk_skb: 233 callbacks suppressed
	[  +2.505796] systemd-fstab-generator[1479]: Ignoring "noauto" option for root device
	[  +3.634449] systemd-fstab-generator[1611]: Ignoring "noauto" option for root device
	[  +0.055756] kauditd_printk_skb: 70 callbacks suppressed
	[  +6.910973] systemd-fstab-generator[2107]: Ignoring "noauto" option for root device
	[  +0.079351] kauditd_printk_skb: 72 callbacks suppressed
	[  +9.264773] kauditd_printk_skb: 51 callbacks suppressed
	[Aug16 17:03] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [d8dadff6cec7] <==
	{"level":"info","ts":"2024-08-16T17:03:20.602718Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34","remote-peer-urls":["https://192.169.0.6:2380"]}
	{"level":"info","ts":"2024-08-16T17:03:20.606154Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:20.606173Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:20.606385Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:20.607792Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"b8c6c7563d17d844","raft-conf-change":"ConfChangeAddLearnerNode","raft-conf-change-node-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:21.553074Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:21.553169Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:21.561367Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"9633c02797b6d34","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-16T17:03:21.561449Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:21.561528Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:21.571776Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"9633c02797b6d34","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-16T17:03:21.571882Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:03:22.121813Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 switched to configuration voters=(676450350361439540 13314548521573537860)"}
	{"level":"info","ts":"2024-08-16T17:03:22.121979Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844"}
	{"level":"info","ts":"2024-08-16T17:03:22.122011Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"b8c6c7563d17d844","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"9633c02797b6d34"}
	{"level":"warn","ts":"2024-08-16T17:05:03.077229Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.58419ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/default/busybox-7dff88458\" ","response":"range_response_count:1 size:1985"}
	{"level":"info","ts":"2024-08-16T17:05:03.077410Z","caller":"traceutil/trace.go:171","msg":"trace[51649115] range","detail":"{range_begin:/registry/replicasets/default/busybox-7dff88458; range_end:; response_count:1; response_revision:920; }","duration":"112.799262ms","start":"2024-08-16T17:05:02.964599Z","end":"2024-08-16T17:05:03.077398Z","steps":["trace[51649115] 'agreement among raft nodes before linearized reading'  (duration: 72.831989ms)","trace[51649115] 'range keys from in-memory index tree'  (duration: 39.723026ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-16T17:05:03.078503Z","caller":"traceutil/trace.go:171","msg":"trace[1276232025] transaction","detail":"{read_only:false; response_revision:921; number_of_response:1; }","duration":"116.265354ms","start":"2024-08-16T17:05:02.962223Z","end":"2024-08-16T17:05:03.078489Z","steps":["trace[1276232025] 'process raft request'  (duration: 75.236605ms)","trace[1276232025] 'compare'  (duration: 39.643768ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-16T17:12:20.455094Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1242}
	{"level":"info","ts":"2024-08-16T17:12:20.478439Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1242,"took":"22.921055ms","hash":729855672,"current-db-size-bytes":3137536,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":1589248,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-08-16T17:12:20.478900Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":729855672,"revision":1242,"compact-revision":-1}
	{"level":"info","ts":"2024-08-16T17:17:20.461022Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1872}
	{"level":"info","ts":"2024-08-16T17:17:20.477338Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1872,"took":"15.709672ms","hash":3229853735,"current-db-size-bytes":3137536,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":1396736,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2024-08-16T17:17:20.478748Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3229853735,"revision":1872,"compact-revision":1242}
	{"level":"info","ts":"2024-08-16T17:17:50.421877Z","caller":"traceutil/trace.go:171","msg":"trace[2006150280] transaction","detail":"{read_only:false; response_revision:2663; number_of_response:1; }","duration":"106.416528ms","start":"2024-08-16T17:17:50.315444Z","end":"2024-08-16T17:17:50.421861Z","steps":["trace[2006150280] 'process raft request'  (duration: 106.325161ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:17:54 up 16 min,  0 users,  load average: 0.69, 0.32, 0.19
	Linux ha-286000 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2677394dcd66] <==
	I0816 17:17:05.222810       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:17:05.222824       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:17:15.231227       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:17:15.231402       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:17:15.231537       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:17:15.231677       1 main.go:299] handling current node
	I0816 17:17:25.222718       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 17:17:25.223104       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	I0816 17:17:25.223665       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.169.0.8 Flags: [] Table: 0} 
	I0816 17:17:25.224064       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:17:25.224221       1 main.go:299] handling current node
	I0816 17:17:25.224408       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:17:25.224648       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:17:35.223909       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:17:35.224305       1 main.go:299] handling current node
	I0816 17:17:35.224566       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:17:35.224845       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:17:35.225222       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 17:17:35.225388       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	I0816 17:17:45.223708       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 17:17:45.223959       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	I0816 17:17:45.224161       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:17:45.224326       1 main.go:299] handling current node
	I0816 17:17:45.224456       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:17:45.224531       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [5f3adc202a7a] <==
	I0816 17:02:23.102546       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0816 17:02:23.105482       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0816 17:02:23.105513       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0816 17:02:23.438732       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0816 17:02:23.465745       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0816 17:02:23.506778       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0816 17:02:23.510486       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5]
	I0816 17:02:23.511071       1 controller.go:615] quota admission added evaluator for: endpoints
	I0816 17:02:23.513769       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0816 17:02:24.114339       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0816 17:02:25.748425       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0816 17:02:25.762462       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0816 17:02:25.770706       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0816 17:02:29.466806       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0816 17:02:29.715365       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0816 17:16:53.377658       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51358: use of closed network connection
	E0816 17:16:53.575639       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51360: use of closed network connection
	E0816 17:16:53.910337       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51365: use of closed network connection
	E0816 17:16:54.106751       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51367: use of closed network connection
	E0816 17:16:54.415124       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51372: use of closed network connection
	E0816 17:16:54.604288       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51374: use of closed network connection
	E0816 17:16:57.856178       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51405: use of closed network connection
	E0816 17:16:58.061897       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51407: use of closed network connection
	E0816 17:16:58.252131       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51409: use of closed network connection
	E0816 17:16:58.440772       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51411: use of closed network connection
	
	
	==> kube-controller-manager [8f5867ee99d9] <==
	I0816 17:15:35.199030       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m02"
	I0816 17:15:42.092912       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000"
	I0816 17:17:21.952833       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-286000-m04\" does not exist"
	I0816 17:17:21.970197       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-286000-m04" podCIDRs=["10.244.2.0/24"]
	I0816 17:17:21.970254       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:17:21.970272       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:17:21.970425       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:17:22.057762       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="38.461µs"
	I0816 17:17:22.062866       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:17:22.339566       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:17:23.372821       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:17:24.020984       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:17:24.021820       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-286000-m04"
	I0816 17:17:24.091142       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:17:32.146156       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:17:44.596616       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:17:44.599585       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-286000-m04"
	I0816 17:17:44.604245       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:17:44.612723       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="57.66µs"
	I0816 17:17:44.622735       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="33.854µs"
	I0816 17:17:44.628514       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="41.171µs"
	I0816 17:17:46.710170       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="4.809826ms"
	I0816 17:17:46.710765       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="32.942µs"
	I0816 17:17:48.294811       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:17:52.608525       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	
	
	==> kube-proxy [81f6c96d4649] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 17:02:30.214569       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 17:02:30.222978       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0816 17:02:30.223011       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 17:02:30.267797       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 17:02:30.267825       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 17:02:30.267843       1 server_linux.go:169] "Using iptables Proxier"
	I0816 17:02:30.270154       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 17:02:30.270311       1 server.go:483] "Version info" version="v1.31.0"
	I0816 17:02:30.270319       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 17:02:30.271352       1 config.go:197] "Starting service config controller"
	I0816 17:02:30.271358       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 17:02:30.271372       1 config.go:104] "Starting endpoint slice config controller"
	I0816 17:02:30.271375       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 17:02:30.271598       1 config.go:326] "Starting node config controller"
	I0816 17:02:30.271603       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 17:02:30.371824       1 shared_informer.go:320] Caches are synced for node config
	I0816 17:02:30.371859       1 shared_informer.go:320] Caches are synced for service config
	I0816 17:02:30.371867       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [f7b2e9efdd94] <==
	W0816 17:02:22.176847       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 17:02:22.176852       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:22.176967       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 17:02:22.176999       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:22.177013       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 17:02:22.179079       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:22.179253       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 17:02:22.179322       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0816 17:02:23.072963       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 17:02:23.073256       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:23.081000       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 17:02:23.081176       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:23.142220       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 17:02:23.142263       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:23.166962       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 17:02:23.167003       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:23.255186       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 17:02:23.255230       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:23.456273       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 17:02:23.456447       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0816 17:02:26.460413       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0816 17:05:02.827326       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-k9m92\": pod busybox-7dff88458-k9m92 is already assigned to node \"ha-286000-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-k9m92" node="ha-286000-m02"
	E0816 17:05:02.827387       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a516859c-0da8-4ba9-a896-ac720495818f(default/busybox-7dff88458-k9m92) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-k9m92"
	E0816 17:05:02.827403       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-k9m92\": pod busybox-7dff88458-k9m92 is already assigned to node \"ha-286000-m02\"" pod="default/busybox-7dff88458-k9m92"
	I0816 17:05:02.827520       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-k9m92" node="ha-286000-m02"
	
	
	==> kubelet <==
	Aug 16 17:13:25 ha-286000 kubelet[2114]: E0816 17:13:25.672677    2114 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 17:13:25 ha-286000 kubelet[2114]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 17:13:25 ha-286000 kubelet[2114]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 17:13:25 ha-286000 kubelet[2114]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:13:25 ha-286000 kubelet[2114]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 17:14:25 ha-286000 kubelet[2114]: E0816 17:14:25.670434    2114 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 17:14:25 ha-286000 kubelet[2114]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 17:14:25 ha-286000 kubelet[2114]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 17:14:25 ha-286000 kubelet[2114]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:14:25 ha-286000 kubelet[2114]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 17:15:25 ha-286000 kubelet[2114]: E0816 17:15:25.670848    2114 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 17:15:25 ha-286000 kubelet[2114]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 17:15:25 ha-286000 kubelet[2114]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 17:15:25 ha-286000 kubelet[2114]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:15:25 ha-286000 kubelet[2114]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 17:16:25 ha-286000 kubelet[2114]: E0816 17:16:25.672451    2114 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 17:16:25 ha-286000 kubelet[2114]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 17:16:25 ha-286000 kubelet[2114]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 17:16:25 ha-286000 kubelet[2114]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:16:25 ha-286000 kubelet[2114]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 17:17:25 ha-286000 kubelet[2114]: E0816 17:17:25.670075    2114 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 17:17:25 ha-286000 kubelet[2114]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 17:17:25 ha-286000 kubelet[2114]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 17:17:25 ha-286000 kubelet[2114]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:17:25 ha-286000 kubelet[2114]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-286000 -n ha-286000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-286000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/CopyFile FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/CopyFile (3.14s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (74.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-amd64 -p ha-286000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-amd64 -p ha-286000 node stop m02 -v=7 --alsologtostderr: (8.321432978s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-amd64 -p ha-286000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-286000 status -v=7 --alsologtostderr: exit status 7 (17.896873378s)

                                                
                                                
-- stdout --
	ha-286000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-286000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-286000-m03
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-286000-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 10:18:04.095660    4334 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:18:04.095979    4334 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:18:04.095984    4334 out.go:358] Setting ErrFile to fd 2...
	I0816 10:18:04.095988    4334 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:18:04.096179    4334 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1276/.minikube/bin
	I0816 10:18:04.096363    4334 out.go:352] Setting JSON to false
	I0816 10:18:04.096384    4334 mustload.go:65] Loading cluster: ha-286000
	I0816 10:18:04.096419    4334 notify.go:220] Checking for updates...
	I0816 10:18:04.096699    4334 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:18:04.096715    4334 status.go:255] checking status of ha-286000 ...
	I0816 10:18:04.097097    4334 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:18:04.097147    4334 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:18:04.106668    4334 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51615
	I0816 10:18:04.107105    4334 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:18:04.107566    4334 main.go:141] libmachine: Using API Version  1
	I0816 10:18:04.107581    4334 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:18:04.107820    4334 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:18:04.107943    4334 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:18:04.108046    4334 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:18:04.108114    4334 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:18:04.109098    4334 status.go:330] ha-286000 host status = "Running" (err=<nil>)
	I0816 10:18:04.109118    4334 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:18:04.109368    4334 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:18:04.109394    4334 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:18:04.118288    4334 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51617
	I0816 10:18:04.118781    4334 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:18:04.119108    4334 main.go:141] libmachine: Using API Version  1
	I0816 10:18:04.119121    4334 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:18:04.119322    4334 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:18:04.119419    4334 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:18:04.119506    4334 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:18:04.119817    4334 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:18:04.119840    4334 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:18:04.129254    4334 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51619
	I0816 10:18:04.129607    4334 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:18:04.129943    4334 main.go:141] libmachine: Using API Version  1
	I0816 10:18:04.129953    4334 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:18:04.130162    4334 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:18:04.130285    4334 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:18:04.130431    4334 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:18:04.130451    4334 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:18:04.130568    4334 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:18:04.130656    4334 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:18:04.130740    4334 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:18:04.130831    4334 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:18:04.168233    4334 ssh_runner.go:195] Run: systemctl --version
	I0816 10:18:04.175820    4334 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:18:04.192724    4334 kubeconfig.go:125] found "ha-286000" server: "https://192.169.0.254:8443"
	I0816 10:18:04.192748    4334 api_server.go:166] Checking apiserver status ...
	I0816 10:18:04.192813    4334 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:18:04.209930    4334 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1897/cgroup
	W0816 10:18:04.220378    4334 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1897/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:18:04.220452    4334 ssh_runner.go:195] Run: ls
	I0816 10:18:04.224038    4334 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0816 10:18:06.240797    4334 api_server.go:279] https://192.169.0.254:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 10:18:06.240838    4334 retry.go:31] will retry after 237.924938ms: https://192.169.0.254:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 10:18:06.479598    4334 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0816 10:18:11.480805    4334 api_server.go:269] stopped: https://192.169.0.254:8443/healthz: Get "https://192.169.0.254:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:18:11.480831    4334 retry.go:31] will retry after 254.205812ms: state is "Stopped"
	I0816 10:18:11.735203    4334 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0816 10:18:14.842163    4334 api_server.go:269] stopped: https://192.169.0.254:8443/healthz: Get "https://192.169.0.254:8443/healthz": dial tcp 192.169.0.254:8443: connect: network is unreachable
	I0816 10:18:14.842201    4334 retry.go:31] will retry after 301.875144ms: state is "Stopped"
	I0816 10:18:15.144324    4334 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0816 10:18:18.233770    4334 api_server.go:269] stopped: https://192.169.0.254:8443/healthz: Get "https://192.169.0.254:8443/healthz": dial tcp 192.169.0.254:8443: connect: network is unreachable
	I0816 10:18:18.233801    4334 retry.go:31] will retry after 450.499143ms: state is "Stopped"
	I0816 10:18:18.685923    4334 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0816 10:18:21.753321    4334 api_server.go:269] stopped: https://192.169.0.254:8443/healthz: Get "https://192.169.0.254:8443/healthz": dial tcp 192.169.0.254:8443: connect: network is unreachable
	I0816 10:18:21.753352    4334 status.go:422] ha-286000 apiserver status = Running (err=<nil>)
	I0816 10:18:21.753365    4334 status.go:257] ha-286000 status: &{Name:ha-286000 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:18:21.753378    4334 status.go:255] checking status of ha-286000-m02 ...
	I0816 10:18:21.753663    4334 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:18:21.753686    4334 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:18:21.762623    4334 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51627
	I0816 10:18:21.763000    4334 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:18:21.763387    4334 main.go:141] libmachine: Using API Version  1
	I0816 10:18:21.763405    4334 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:18:21.763639    4334 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:18:21.763739    4334 main.go:141] libmachine: (ha-286000-m02) Calling .GetState
	I0816 10:18:21.763809    4334 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:18:21.763900    4334 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:18:21.764826    4334 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid 3806 missing from process table
	I0816 10:18:21.764866    4334 status.go:330] ha-286000-m02 host status = "Stopped" (err=<nil>)
	I0816 10:18:21.764873    4334 status.go:343] host is not running, skipping remaining checks
	I0816 10:18:21.764879    4334 status.go:257] ha-286000-m02 status: &{Name:ha-286000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:18:21.764894    4334 status.go:255] checking status of ha-286000-m03 ...
	I0816 10:18:21.765143    4334 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:18:21.765167    4334 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:18:21.774148    4334 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51629
	I0816 10:18:21.774505    4334 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:18:21.774852    4334 main.go:141] libmachine: Using API Version  1
	I0816 10:18:21.774866    4334 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:18:21.775071    4334 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:18:21.775163    4334 main.go:141] libmachine: (ha-286000-m03) Calling .GetState
	I0816 10:18:21.775241    4334 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:18:21.775341    4334 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:18:21.776312    4334 status.go:330] ha-286000-m03 host status = "Running" (err=<nil>)
	I0816 10:18:21.776321    4334 host.go:66] Checking if "ha-286000-m03" exists ...
	I0816 10:18:21.776589    4334 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:18:21.776612    4334 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:18:21.785609    4334 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51631
	I0816 10:18:21.785961    4334 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:18:21.786302    4334 main.go:141] libmachine: Using API Version  1
	I0816 10:18:21.786318    4334 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:18:21.786542    4334 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:18:21.786670    4334 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:18:21.786758    4334 host.go:66] Checking if "ha-286000-m03" exists ...
	I0816 10:18:21.787019    4334 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:18:21.787038    4334 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:18:21.795993    4334 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51633
	I0816 10:18:21.796364    4334 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:18:21.796682    4334 main.go:141] libmachine: Using API Version  1
	I0816 10:18:21.796690    4334 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:18:21.796900    4334 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:18:21.797009    4334 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:18:21.797165    4334 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:18:21.797185    4334 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:18:21.797279    4334 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:18:21.797368    4334 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:18:21.797458    4334 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:18:21.797547    4334 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:18:21.833639    4334 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:18:21.845516    4334 kubeconfig.go:125] found "ha-286000" server: "https://192.169.0.254:8443"
	I0816 10:18:21.845531    4334 api_server.go:166] Checking apiserver status ...
	I0816 10:18:21.845571    4334 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 10:18:21.856289    4334 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:18:21.856301    4334 status.go:422] ha-286000-m03 apiserver status = Stopped (err=<nil>)
	I0816 10:18:21.856313    4334 status.go:257] ha-286000-m03 status: &{Name:ha-286000-m03 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:18:21.856324    4334 status.go:255] checking status of ha-286000-m04 ...
	I0816 10:18:21.856596    4334 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:18:21.856618    4334 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:18:21.865711    4334 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51636
	I0816 10:18:21.866067    4334 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:18:21.866445    4334 main.go:141] libmachine: Using API Version  1
	I0816 10:18:21.866470    4334 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:18:21.866679    4334 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:18:21.866805    4334 main.go:141] libmachine: (ha-286000-m04) Calling .GetState
	I0816 10:18:21.866895    4334 main.go:141] libmachine: (ha-286000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:18:21.866985    4334 main.go:141] libmachine: (ha-286000-m04) DBG | hyperkit pid from json: 4248
	I0816 10:18:21.867954    4334 status.go:330] ha-286000-m04 host status = "Running" (err=<nil>)
	I0816 10:18:21.867964    4334 host.go:66] Checking if "ha-286000-m04" exists ...
	I0816 10:18:21.868224    4334 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:18:21.868243    4334 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:18:21.877097    4334 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51638
	I0816 10:18:21.877482    4334 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:18:21.877847    4334 main.go:141] libmachine: Using API Version  1
	I0816 10:18:21.877864    4334 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:18:21.878079    4334 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:18:21.878194    4334 main.go:141] libmachine: (ha-286000-m04) Calling .GetIP
	I0816 10:18:21.878283    4334 host.go:66] Checking if "ha-286000-m04" exists ...
	I0816 10:18:21.878553    4334 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:18:21.878575    4334 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:18:21.887464    4334 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51640
	I0816 10:18:21.887826    4334 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:18:21.888169    4334 main.go:141] libmachine: Using API Version  1
	I0816 10:18:21.888185    4334 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:18:21.888404    4334 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:18:21.888514    4334 main.go:141] libmachine: (ha-286000-m04) Calling .DriverName
	I0816 10:18:21.888668    4334 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:18:21.888679    4334 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHHostname
	I0816 10:18:21.888760    4334 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHPort
	I0816 10:18:21.888854    4334 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHKeyPath
	I0816 10:18:21.888957    4334 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHUsername
	I0816 10:18:21.889039    4334 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m04/id_rsa Username:docker}
	I0816 10:18:21.922049    4334 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:18:21.933862    4334 status.go:257] ha-286000-m04 status: &{Name:ha-286000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
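
Note: the retry loop in the ** stderr ** block above is minikube's status probe against the shared apiserver endpoint https://192.169.0.254:8443/healthz, which first answers HTTP 500 with "[-]etcd failed" and then stops answering at all ("network is unreachable"). Below is a minimal Go sketch of such a probe, illustrative only and not minikube's actual code; the endpoint address and the rough per-request timeout are taken from the log above.

// healthz_probe.go: hypothetical stand-alone probe, not minikube code.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // the log shows requests being cut off after a similar deadline
		Transport: &http.Transport{
			// the test cluster uses a minikube-generated CA, so this sketch
			// skips certificate verification instead of loading that CA
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.169.0.254:8443/healthz")
	if err != nil {
		// matches the "context deadline exceeded" / "network is unreachable" lines above
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// an HTTP 500 whose body contains "[-]etcd failed" is the state captured above
	fmt.Printf("HTTP %d\n%s\n", resp.StatusCode, body)
}
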
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-amd64 -p ha-286000 status -v=7 --alsologtostderr": ha-286000
type: Control Plane
host: Running
kubelet: Running
apiserver: Stopped
kubeconfig: Configured

ha-286000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-286000-m03
type: Control Plane
host: Running
kubelet: Stopped
apiserver: Stopped
kubeconfig: Configured

ha-286000-m04
type: Worker
host: Running
kubelet: Running

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-amd64 -p ha-286000 status -v=7 --alsologtostderr": ha-286000
type: Control Plane
host: Running
kubelet: Running
apiserver: Stopped
kubeconfig: Configured

ha-286000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-286000-m03
type: Control Plane
host: Running
kubelet: Stopped
apiserver: Stopped
kubeconfig: Configured

ha-286000-m04
type: Worker
host: Running
kubelet: Running

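Note: the two assertions above (ha_test.go:381 and ha_test.go:384) scan the status dump for components reported as Running; after stopping only m02, the test expects three running kubelets and two running apiservers, while the dump shows two and zero. A minimal Go sketch of that counting, illustrative only and not the actual ha_test.go helpers:

// status_count.go: hypothetical recreation of the failed checks, not test code.
package main

import (
	"fmt"
	"strings"
)

func main() {
	// abbreviated copy of the status dump from the assertion messages above
	status := `ha-286000
kubelet: Running
apiserver: Stopped

ha-286000-m02
kubelet: Stopped
apiserver: Stopped

ha-286000-m03
kubelet: Stopped
apiserver: Stopped

ha-286000-m04
kubelet: Running`

	kubelets := strings.Count(status, "kubelet: Running")     // 2, but 3 were expected
	apiservers := strings.Count(status, "apiserver: Running") // 0, but 2 were expected
	fmt.Printf("running kubelets=%d, running apiservers=%d\n", kubelets, apiservers)
}
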
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-286000 -n ha-286000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-286000 -n ha-286000: exit status 2 (17.162846294s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-286000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-286000 logs -n 25: (14.349985293s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:15 PDT | 16 Aug 24 10:15 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:15 PDT | 16 Aug 24 10:15 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:15 PDT | 16 Aug 24 10:15 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:15 PDT | 16 Aug 24 10:15 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:15 PDT | 16 Aug 24 10:15 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT |                     |
	|         | busybox-7dff88458-99xmp --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT |                     |
	|         | busybox-7dff88458-99xmp --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT |                     |
	|         | busybox-7dff88458-99xmp -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92 -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT |                     |
	|         | busybox-7dff88458-99xmp              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92 -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| node    | add -p ha-286000 -v=7                | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:17 PDT | 16 Aug 24 10:17 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-286000 node stop m02 -v=7         | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:17 PDT | 16 Aug 24 10:18 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 10:01:48
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 10:01:48.113296    3758 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:01:48.113462    3758 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:01:48.113467    3758 out.go:358] Setting ErrFile to fd 2...
	I0816 10:01:48.113470    3758 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:01:48.113648    3758 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1276/.minikube/bin
	I0816 10:01:48.115192    3758 out.go:352] Setting JSON to false
	I0816 10:01:48.140204    3758 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1878,"bootTime":1723825830,"procs":433,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0816 10:01:48.140316    3758 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:01:48.194498    3758 out.go:177] * [ha-286000] minikube v1.33.1 on Darwin 14.6.1
	I0816 10:01:48.236650    3758 notify.go:220] Checking for updates...
	I0816 10:01:48.262360    3758 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:01:48.320415    3758 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:01:48.381368    3758 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0816 10:01:48.452505    3758 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:01:48.474398    3758 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:01:48.495459    3758 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:01:48.520120    3758 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:01:48.550379    3758 out.go:177] * Using the hyperkit driver based on user configuration
	I0816 10:01:48.592331    3758 start.go:297] selected driver: hyperkit
	I0816 10:01:48.592358    3758 start.go:901] validating driver "hyperkit" against <nil>
	I0816 10:01:48.592382    3758 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:01:48.596757    3758 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:01:48.596907    3758 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19461-1276/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0816 10:01:48.605545    3758 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0816 10:01:48.609748    3758 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:01:48.609772    3758 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0816 10:01:48.609805    3758 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 10:01:48.610046    3758 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:01:48.610094    3758 cni.go:84] Creating CNI manager for ""
	I0816 10:01:48.610103    3758 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0816 10:01:48.610108    3758 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0816 10:01:48.610236    3758 start.go:340] cluster config:
	{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:01:48.610323    3758 iso.go:125] acquiring lock: {Name:mkb430b43b5e7a8a683454482519837ed996c78d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:01:48.632334    3758 out.go:177] * Starting "ha-286000" primary control-plane node in "ha-286000" cluster
	I0816 10:01:48.653456    3758 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:01:48.653537    3758 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0816 10:01:48.653563    3758 cache.go:56] Caching tarball of preloaded images
	I0816 10:01:48.653777    3758 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 10:01:48.653813    3758 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:01:48.654332    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:01:48.654370    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json: {Name:mk79fdae2e45cdfc987caaf16c5f211150e90dc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:01:48.654995    3758 start.go:360] acquireMachinesLock for ha-286000: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:01:48.655106    3758 start.go:364] duration metric: took 87.458µs to acquireMachinesLock for "ha-286000"
	I0816 10:01:48.655154    3758 start.go:93] Provisioning new machine with config: &{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:01:48.655246    3758 start.go:125] createHost starting for "" (driver="hyperkit")
	I0816 10:01:48.677450    3758 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 10:01:48.677701    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:01:48.677786    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:01:48.687593    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51004
	I0816 10:01:48.687935    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:01:48.688347    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:01:48.688357    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:01:48.688562    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:01:48.688668    3758 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:01:48.688765    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:01:48.688895    3758 start.go:159] libmachine.API.Create for "ha-286000" (driver="hyperkit")
	I0816 10:01:48.688916    3758 client.go:168] LocalClient.Create starting
	I0816 10:01:48.688948    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem
	I0816 10:01:48.688999    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:01:48.689012    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:01:48.689063    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem
	I0816 10:01:48.689100    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:01:48.689111    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:01:48.689128    3758 main.go:141] libmachine: Running pre-create checks...
	I0816 10:01:48.689135    3758 main.go:141] libmachine: (ha-286000) Calling .PreCreateCheck
	I0816 10:01:48.689224    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:48.689427    3758 main.go:141] libmachine: (ha-286000) Calling .GetConfigRaw
	I0816 10:01:48.698771    3758 main.go:141] libmachine: Creating machine...
	I0816 10:01:48.698794    3758 main.go:141] libmachine: (ha-286000) Calling .Create
	I0816 10:01:48.699018    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:48.699296    3758 main.go:141] libmachine: (ha-286000) DBG | I0816 10:01:48.699015    3766 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:01:48.699450    3758 main.go:141] libmachine: (ha-286000) Downloading /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1276/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 10:01:48.884950    3758 main.go:141] libmachine: (ha-286000) DBG | I0816 10:01:48.884833    3766 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa...
	I0816 10:01:48.986348    3758 main.go:141] libmachine: (ha-286000) DBG | I0816 10:01:48.986275    3766 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/ha-286000.rawdisk...
	I0816 10:01:48.986388    3758 main.go:141] libmachine: (ha-286000) DBG | Writing magic tar header
	I0816 10:01:48.986396    3758 main.go:141] libmachine: (ha-286000) DBG | Writing SSH key tar header
	I0816 10:01:48.987038    3758 main.go:141] libmachine: (ha-286000) DBG | I0816 10:01:48.987004    3766 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000 ...
	I0816 10:01:49.360700    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:49.360726    3758 main.go:141] libmachine: (ha-286000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/hyperkit.pid
	I0816 10:01:49.360741    3758 main.go:141] libmachine: (ha-286000) DBG | Using UUID ad96de67-e238-408c-89eb-d74e5b68d297
	I0816 10:01:49.471852    3758 main.go:141] libmachine: (ha-286000) DBG | Generated MAC 66:c8:48:4e:12:1b
	I0816 10:01:49.471873    3758 main.go:141] libmachine: (ha-286000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000
	I0816 10:01:49.471908    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ad96de67-e238-408c-89eb-d74e5b68d297", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000b2330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:01:49.471951    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ad96de67-e238-408c-89eb-d74e5b68d297", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000b2330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:01:49.471996    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "ad96de67-e238-408c-89eb-d74e5b68d297", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/ha-286000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"}
	I0816 10:01:49.472043    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U ad96de67-e238-408c-89eb-d74e5b68d297 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/ha-286000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"
	I0816 10:01:49.472063    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 10:01:49.475179    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: Pid is 3771
	I0816 10:01:49.475583    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 0
	I0816 10:01:49.475597    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:49.475674    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:49.476510    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:49.476583    3758 main.go:141] libmachine: (ha-286000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0816 10:01:49.476592    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:01:49.476602    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:01:49.476612    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:01:49.482826    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 10:01:49.534430    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 10:01:49.535040    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:01:49.535056    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:01:49.535064    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:01:49.535072    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:01:49.909510    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 10:01:49.909527    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 10:01:50.024439    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:01:50.024459    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:01:50.024479    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:01:50.024494    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:01:50.025363    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 10:01:50.025375    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 10:01:51.477573    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 1
	I0816 10:01:51.477587    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:51.477660    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:51.478481    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:51.478504    3758 main.go:141] libmachine: (ha-286000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0816 10:01:51.478511    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:01:51.478520    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:01:51.478531    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:01:53.479582    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 2
	I0816 10:01:53.479599    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:53.479760    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:53.480630    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:53.480678    3758 main.go:141] libmachine: (ha-286000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0816 10:01:53.480688    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:01:53.480697    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:01:53.480710    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:01:55.482614    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 3
	I0816 10:01:55.482630    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:55.482703    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:55.483616    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:55.483662    3758 main.go:141] libmachine: (ha-286000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0816 10:01:55.483672    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:01:55.483679    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:01:55.483687    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:01:55.581983    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:55 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 10:01:55.582063    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:55 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 10:01:55.582075    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:55 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 10:01:55.606414    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:55 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 10:01:57.484306    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 4
	I0816 10:01:57.484322    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:57.484414    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:57.485196    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:57.485261    3758 main.go:141] libmachine: (ha-286000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0816 10:01:57.485273    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:01:57.485286    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:01:57.485294    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:01:59.485978    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 5
	I0816 10:01:59.486008    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:59.486192    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:59.487610    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:59.487709    3758 main.go:141] libmachine: (ha-286000) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:01:59.487730    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:01:59.487784    3758 main.go:141] libmachine: (ha-286000) DBG | Found match: 66:c8:48:4e:12:1b
	I0816 10:01:59.487797    3758 main.go:141] libmachine: (ha-286000) DBG | IP: 192.169.0.5
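Attempts 1-5 above are the hyperkit driver polling /var/db/dhcpd_leases until the new VM's MAC address appears. A minimal Go sketch of one lookup pass, assuming macOS bootpd's brace-delimited records with ip_address= preceding hw_address= (and hw_address carrying a leading hardware-type byte); findLeaseIP is an illustrative helper, not the driver's actual parser:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findLeaseIP scans a bootpd lease file for a hardware address and
// returns the IP bound to it. Assumed layout (macOS /var/db/dhcpd_leases):
// brace-delimited records with name=, ip_address= and hw_address= lines,
// where hw_address has a leading type byte, e.g. "1,66:c8:48:4e:12:1b".
func findLeaseIP(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			val := strings.TrimPrefix(line, "hw_address=")
			if i := strings.IndexByte(val, ','); i >= 0 && val[i+1:] == mac {
				return ip, nil
			}
		case line == "}":
			ip = "" // record boundary: forget the previous entry's address
		}
	}
	if err := sc.Err(); err != nil {
		return "", err
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}

func main() {
	ip, err := findLeaseIP("/var/db/dhcpd_leases", "66:c8:48:4e:12:1b")
	if err != nil {
		fmt.Fprintln(os.Stderr, err) // the driver would sleep 2s and retry here
		os.Exit(1)
	}
	fmt.Println(ip)
}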
	I0816 10:01:59.487831    3758 main.go:141] libmachine: (ha-286000) Calling .GetConfigRaw
	I0816 10:01:59.488860    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:01:59.489069    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:01:59.489276    3758 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 10:01:59.489289    3758 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:01:59.489463    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:59.489567    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:59.490615    3758 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 10:01:59.490628    3758 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 10:01:59.490644    3758 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 10:01:59.490652    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:01:59.490782    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:01:59.490923    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:01:59.491055    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:01:59.491190    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:01:59.491362    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:01:59.491595    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:01:59.491605    3758 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 10:01:59.511220    3758 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0816 10:02:02.573095    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 10:02:02.573108    3758 main.go:141] libmachine: Detecting the provisioner...
	I0816 10:02:02.573113    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:02.573255    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:02.573385    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.573489    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.573602    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:02.573753    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:02.573906    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:02.573914    3758 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 10:02:02.632510    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 10:02:02.632561    3758 main.go:141] libmachine: found compatible host: buildroot
	I0816 10:02:02.632568    3758 main.go:141] libmachine: Provisioning with buildroot...
	I0816 10:02:02.632573    3758 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:02:02.632721    3758 buildroot.go:166] provisioning hostname "ha-286000"
	I0816 10:02:02.632732    3758 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:02:02.632834    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:02.632956    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:02.633046    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.633122    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.633236    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:02.633363    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:02.633492    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:02.633500    3758 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-286000 && echo "ha-286000" | sudo tee /etc/hostname
	I0816 10:02:02.702336    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-286000
	
	I0816 10:02:02.702354    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:02.702482    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:02.702569    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.702670    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.702756    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:02.702893    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:02.703040    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:02.703052    3758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-286000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-286000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-286000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 10:02:02.766997    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 10:02:02.767016    3758 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19461-1276/.minikube CaCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19461-1276/.minikube}
	I0816 10:02:02.767026    3758 buildroot.go:174] setting up certificates
	I0816 10:02:02.767038    3758 provision.go:84] configureAuth start
	I0816 10:02:02.767044    3758 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:02:02.767175    3758 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:02:02.767270    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:02.767372    3758 provision.go:143] copyHostCerts
	I0816 10:02:02.767406    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:02:02.767476    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem, removing ...
	I0816 10:02:02.767485    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:02:02.767630    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem (1679 bytes)
	I0816 10:02:02.767837    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:02:02.767868    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem, removing ...
	I0816 10:02:02.767873    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:02:02.767991    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem (1078 bytes)
	I0816 10:02:02.768146    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:02:02.768179    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem, removing ...
	I0816 10:02:02.768184    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:02:02.768268    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem (1123 bytes)
	I0816 10:02:02.768410    3758 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem org=jenkins.ha-286000 san=[127.0.0.1 192.169.0.5 ha-286000 localhost minikube]
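The provision.go:117 step above issues a server certificate signed by the cluster CA with the SAN set [127.0.0.1 192.169.0.5 ha-286000 localhost minikube]. A minimal Go sketch of that issuance with crypto/x509, using a throwaway in-memory CA in place of the ca.pem/ca-key.pem pair loaded in the log (error handling elided for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA, standing in for the ca.pem/ca-key.pem pair above.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the same SAN set the provisioner logs.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-286000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		DNSNames:     []string{"ha-286000", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.5")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}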
	I0816 10:02:02.915225    3758 provision.go:177] copyRemoteCerts
	I0816 10:02:02.915279    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 10:02:02.915299    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:02.915444    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:02.915558    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.915640    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:02.915723    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:02.953069    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 10:02:02.953145    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 10:02:02.973516    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 10:02:02.973600    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0816 10:02:02.993038    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 10:02:02.993100    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 10:02:03.013565    3758 provision.go:87] duration metric: took 246.514857ms to configureAuth
	I0816 10:02:03.013581    3758 buildroot.go:189] setting minikube options for container-runtime
	I0816 10:02:03.013724    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:02:03.013737    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:03.013889    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:03.013978    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:03.014067    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.014147    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.014250    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:03.014369    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:03.014498    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:03.014506    3758 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 10:02:03.073645    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 10:02:03.073660    3758 buildroot.go:70] root file system type: tmpfs
	I0816 10:02:03.073734    3758 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 10:02:03.073745    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:03.073872    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:03.073966    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.074063    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.074164    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:03.074292    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:03.074429    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:03.074479    3758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 10:02:03.148891    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 10:02:03.148912    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:03.149052    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:03.149138    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.149235    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.149324    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:03.149466    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:03.149604    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:03.149616    3758 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 10:02:04.780498    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
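The empty ExecStart= followed by a full ExecStart= line in the unit above is the standard systemd reset idiom the embedded comments describe, and the diff || mv command installs the rendered docker.service.new only when it differs from what is already in place. A hedged Go sketch of rendering such a unit with text/template; the field names here are illustrative, not minikube's actual provisioner variables:

package main

import (
	"os"
	"text/template"
)

// A pared-down version of the docker.service template rendered above.
const unit = `[Service]
# Clear the inherited ExecStart before setting our own; systemd rejects
# multiple ExecStart= lines for anything but Type=oneshot services.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:{{.Port}} -H unix:///var/run/docker.sock --tlsverify --tlscacert {{.CACert}} --tlscert {{.ServerCert}} --tlskey {{.ServerKey}} --label provider={{.Provider}}
`

type opts struct {
	Port                          int
	CACert, ServerCert, ServerKey string
	Provider                      string
}

func main() {
	t := template.Must(template.New("docker").Parse(unit))
	_ = t.Execute(os.Stdout, opts{
		Port:       2376,
		CACert:     "/etc/docker/ca.pem",
		ServerCert: "/etc/docker/server.pem",
		ServerKey:  "/etc/docker/server-key.pem",
		Provider:   "hyperkit",
	})
}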
	
	I0816 10:02:04.780519    3758 main.go:141] libmachine: Checking connection to Docker...
	I0816 10:02:04.780526    3758 main.go:141] libmachine: (ha-286000) Calling .GetURL
	I0816 10:02:04.780691    3758 main.go:141] libmachine: Docker is up and running!
	I0816 10:02:04.780698    3758 main.go:141] libmachine: Reticulating splines...
	I0816 10:02:04.780702    3758 client.go:171] duration metric: took 16.091876944s to LocalClient.Create
	I0816 10:02:04.780713    3758 start.go:167] duration metric: took 16.091916939s to libmachine.API.Create "ha-286000"
	I0816 10:02:04.780722    3758 start.go:293] postStartSetup for "ha-286000" (driver="hyperkit")
	I0816 10:02:04.780729    3758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 10:02:04.780738    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:04.780873    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 10:02:04.780884    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:04.780956    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:04.781047    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:04.781154    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:04.781275    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:04.819650    3758 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 10:02:04.823314    3758 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 10:02:04.823335    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/addons for local assets ...
	I0816 10:02:04.823443    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/files for local assets ...
	I0816 10:02:04.823624    3758 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> 18312.pem in /etc/ssl/certs
	I0816 10:02:04.823631    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /etc/ssl/certs/18312.pem
	I0816 10:02:04.823837    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 10:02:04.831860    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:02:04.850901    3758 start.go:296] duration metric: took 70.172878ms for postStartSetup
	I0816 10:02:04.850935    3758 main.go:141] libmachine: (ha-286000) Calling .GetConfigRaw
	I0816 10:02:04.851561    3758 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:02:04.851702    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:02:04.852053    3758 start.go:128] duration metric: took 16.196888252s to createHost
	I0816 10:02:04.852071    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:04.852167    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:04.852261    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:04.852344    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:04.852420    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:04.852525    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:04.852648    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:04.852655    3758 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 10:02:04.911299    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723827724.986438448
	
	I0816 10:02:04.911317    3758 fix.go:216] guest clock: 1723827724.986438448
	I0816 10:02:04.911322    3758 fix.go:229] Guest: 2024-08-16 10:02:04.986438448 -0700 PDT Remote: 2024-08-16 10:02:04.852061 -0700 PDT m=+16.776125584 (delta=134.377448ms)
	I0816 10:02:04.911341    3758 fix.go:200] guest clock delta is within tolerance: 134.377448ms
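The guest-clock check above runs date +%s.%N in the VM and compares the result with host time; here the ~134ms delta is accepted. A small Go sketch of that comparison, reusing the two timestamps from this log; the 2-second tolerance is an assumption for illustration, not minikube's configured value:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns `date +%s.%N` output such as
// "1723827724.986438448" into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	sec, nsec, _ := strings.Cut(strings.TrimSpace(s), ".")
	secs, err := strconv.ParseInt(sec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	ns, err := strconv.ParseInt(nsec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	return time.Unix(secs, ns), nil
}

func main() {
	guest, err := parseGuestClock("1723827724.986438448")
	if err != nil {
		panic(err)
	}
	host := time.Unix(1723827724, 852061000) // Remote: 2024-08-16 10:02:04.852061 -0700 PDT
	delta := guest.Sub(host)                 // prints 134.377448ms, matching the log
	fmt.Printf("delta=%s within=%v\n", delta, delta.Abs() < 2*time.Second)
}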
	I0816 10:02:04.911345    3758 start.go:83] releasing machines lock for "ha-286000", held for 16.256325696s
	I0816 10:02:04.911364    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:04.911498    3758 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:02:04.911592    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:04.911942    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:04.912068    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:04.912144    3758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 10:02:04.912171    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:04.912218    3758 ssh_runner.go:195] Run: cat /version.json
	I0816 10:02:04.912231    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:04.912262    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:04.912305    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:04.912360    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:04.912384    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:04.912437    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:04.912464    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:04.912547    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:04.912567    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:04.945688    3758 ssh_runner.go:195] Run: systemctl --version
	I0816 10:02:04.997158    3758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 10:02:05.002278    3758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 10:02:05.002315    3758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 10:02:05.015089    3758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 10:02:05.015098    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:02:05.015195    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:02:05.030338    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 10:02:05.038588    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 10:02:05.046898    3758 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 10:02:05.046932    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 10:02:05.055211    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:02:05.063533    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 10:02:05.073244    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:02:05.081895    3758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 10:02:05.090915    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 10:02:05.099773    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 10:02:05.108719    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 10:02:05.117772    3758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 10:02:05.125574    3758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 10:02:05.145403    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:05.261568    3758 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0816 10:02:05.278920    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:02:05.278996    3758 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 10:02:05.293925    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:02:05.306704    3758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 10:02:05.320000    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:02:05.330230    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:02:05.340255    3758 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 10:02:05.369098    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:02:05.379374    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:02:05.395120    3758 ssh_runner.go:195] Run: which cri-dockerd
	I0816 10:02:05.397921    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 10:02:05.404866    3758 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 10:02:05.417935    3758 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 10:02:05.514793    3758 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 10:02:05.626208    3758 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 10:02:05.626292    3758 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 10:02:05.640317    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:05.737978    3758 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:02:08.105430    3758 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.367458496s)
	I0816 10:02:08.105491    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0816 10:02:08.115970    3758 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0816 10:02:08.129325    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:02:08.140312    3758 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0816 10:02:08.237628    3758 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0816 10:02:08.340285    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:08.436168    3758 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0816 10:02:08.457808    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:02:08.468314    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:08.561830    3758 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0816 10:02:08.620080    3758 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0816 10:02:08.620164    3758 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0816 10:02:08.624716    3758 start.go:563] Will wait 60s for crictl version
	I0816 10:02:08.624764    3758 ssh_runner.go:195] Run: which crictl
	I0816 10:02:08.627806    3758 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 10:02:08.660437    3758 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0816 10:02:08.660505    3758 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:02:08.678073    3758 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:02:08.722165    3758 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0816 10:02:08.722217    3758 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:02:08.722605    3758 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0816 10:02:08.727140    3758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:02:08.736769    3758 kubeadm.go:883] updating cluster {Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 10:02:08.736830    3758 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:02:08.736887    3758 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 10:02:08.748223    3758 docker.go:685] Got preloaded images: 
	I0816 10:02:08.748236    3758 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.0 wasn't preloaded
	I0816 10:02:08.748294    3758 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0816 10:02:08.755601    3758 ssh_runner.go:195] Run: which lz4
	I0816 10:02:08.758510    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0816 10:02:08.758630    3758 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 10:02:08.761594    3758 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 10:02:08.761610    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342554258 bytes)
	I0816 10:02:09.771496    3758 docker.go:649] duration metric: took 1.012922149s to copy over tarball
	I0816 10:02:09.771559    3758 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 10:02:12.050040    3758 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.27849665s)
	I0816 10:02:12.050054    3758 ssh_runner.go:146] rm: /preloaded.tar.lz4
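The preload step scp's an lz4-compressed image tarball into the VM and unpacks it with tar -I lz4, as logged above. A read-only Go sketch of walking such an archive using the third-party github.com/pierrec/lz4/v4 package (minikube itself shells out to tar; this only lists entries rather than extracting them):

package main

import (
	"archive/tar"
	"fmt"
	"io"
	"os"

	"github.com/pierrec/lz4/v4"
)

func main() {
	f, err := os.Open("preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Layer a tar reader over a streaming lz4 decompressor.
	tr := tar.NewReader(lz4.NewReader(f))
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s\t%d bytes\n", hdr.Name, hdr.Size)
	}
}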
	I0816 10:02:12.075438    3758 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0816 10:02:12.083331    3758 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0816 10:02:12.097261    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:12.204348    3758 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:02:14.516272    3758 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.311936705s)
	I0816 10:02:14.516363    3758 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 10:02:14.529500    3758 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0816 10:02:14.529519    3758 cache_images.go:84] Images are preloaded, skipping loading
	I0816 10:02:14.529530    3758 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.0 docker true true} ...
	I0816 10:02:14.529607    3758 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-286000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 10:02:14.529674    3758 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0816 10:02:14.568093    3758 cni.go:84] Creating CNI manager for ""
	I0816 10:02:14.568107    3758 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0816 10:02:14.568120    3758 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 10:02:14.568134    3758 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-286000 NodeName:ha-286000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 10:02:14.568222    3758 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-286000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
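The generated kubeadm config above is one multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A minimal Go sketch that walks such a stream with gopkg.in/yaml.v3 and prints each document's kind; the kubeadm.yaml path is illustrative:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// yaml.Decoder steps through "---"-separated documents one Decode at a time.
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}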
	
	I0816 10:02:14.568237    3758 kube-vip.go:115] generating kube-vip config ...
	I0816 10:02:14.568286    3758 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 10:02:14.580706    3758 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 10:02:14.580778    3758 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0816 10:02:14.580832    3758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 10:02:14.588124    3758 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 10:02:14.588174    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0816 10:02:14.595283    3758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0816 10:02:14.608721    3758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 10:02:14.622036    3758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0816 10:02:14.635525    3758 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1446 bytes)
	I0816 10:02:14.648868    3758 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0816 10:02:14.651748    3758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:02:14.661305    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:14.752561    3758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:02:14.768967    3758 certs.go:68] Setting up /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000 for IP: 192.169.0.5
	I0816 10:02:14.768980    3758 certs.go:194] generating shared ca certs ...
	I0816 10:02:14.768991    3758 certs.go:226] acquiring lock for ca certs: {Name:mka549fbc467849834d2267da204028c49d88a3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:14.769183    3758 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key
	I0816 10:02:14.769258    3758 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key
	I0816 10:02:14.769269    3758 certs.go:256] generating profile certs ...
	I0816 10:02:14.769315    3758 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key
	I0816 10:02:14.769328    3758 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt with IP's: []
	I0816 10:02:14.849573    3758 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt ...
	I0816 10:02:14.849589    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt: {Name:mk9c6a2b5871e8d33f12e0a58268b028ea37ab4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:14.849911    3758 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key ...
	I0816 10:02:14.849919    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key: {Name:mk7d8ed264e583d7ced6b16b60c72c11b15a1f22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:14.850137    3758 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.af99fd6a
	I0816 10:02:14.850151    3758 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.af99fd6a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.254]
	I0816 10:02:14.968129    3758 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.af99fd6a ...
	I0816 10:02:14.968143    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.af99fd6a: {Name:mk3b8945a16852dca8274ec1ab3a9f2eade80831 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:14.968464    3758 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.af99fd6a ...
	I0816 10:02:14.968473    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.af99fd6a: {Name:mk90fcce3348a214c4733fe5a1fb806faff7773f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:14.968717    3758 certs.go:381] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.af99fd6a -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt
	I0816 10:02:14.968940    3758 certs.go:385] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.af99fd6a -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key
	I0816 10:02:14.969171    3758 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key
	I0816 10:02:14.969185    3758 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt with IP's: []
	I0816 10:02:15.029329    3758 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt ...
	I0816 10:02:15.029338    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt: {Name:mk70aaa3903fb58df2124d779dbea07aa95769f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:15.029604    3758 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key ...
	I0816 10:02:15.029611    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key: {Name:mka3a85354927e8da31f9dfc67f92dea3c77ca65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:15.029850    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 10:02:15.029876    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 10:02:15.029898    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 10:02:15.029940    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 10:02:15.029958    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 10:02:15.029990    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 10:02:15.030008    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 10:02:15.030025    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 10:02:15.030118    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem (1338 bytes)
	W0816 10:02:15.030163    3758 certs.go:480] ignoring /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831_empty.pem, impossibly tiny 0 bytes
	I0816 10:02:15.030171    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 10:02:15.030240    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem (1078 bytes)
	I0816 10:02:15.030268    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem (1123 bytes)
	I0816 10:02:15.030354    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem (1679 bytes)
	I0816 10:02:15.030467    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:02:15.030499    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:15.030525    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem -> /usr/share/ca-certificates/1831.pem
	I0816 10:02:15.030542    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /usr/share/ca-certificates/18312.pem
	I0816 10:02:15.031030    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 10:02:15.051618    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 10:02:15.071318    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 10:02:15.091017    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 10:02:15.110497    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0816 10:02:15.131033    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 10:02:15.150747    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 10:02:15.170186    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 10:02:15.189808    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 10:02:15.209868    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem --> /usr/share/ca-certificates/1831.pem (1338 bytes)
	I0816 10:02:15.229378    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /usr/share/ca-certificates/18312.pem (1708 bytes)
	I0816 10:02:15.248667    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 10:02:15.262035    3758 ssh_runner.go:195] Run: openssl version
	I0816 10:02:15.266135    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 10:02:15.275959    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:15.279367    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:48 /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:15.279403    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:15.283611    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 10:02:15.291872    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1831.pem && ln -fs /usr/share/ca-certificates/1831.pem /etc/ssl/certs/1831.pem"
	I0816 10:02:15.299890    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1831.pem
	I0816 10:02:15.303179    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 16:57 /usr/share/ca-certificates/1831.pem
	I0816 10:02:15.303212    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1831.pem
	I0816 10:02:15.307423    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1831.pem /etc/ssl/certs/51391683.0"
	I0816 10:02:15.315741    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18312.pem && ln -fs /usr/share/ca-certificates/18312.pem /etc/ssl/certs/18312.pem"
	I0816 10:02:15.324088    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18312.pem
	I0816 10:02:15.327441    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 16:57 /usr/share/ca-certificates/18312.pem
	I0816 10:02:15.327479    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18312.pem
	I0816 10:02:15.331723    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18312.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 10:02:15.339921    3758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 10:02:15.343061    3758 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 10:02:15.343109    3758 kubeadm.go:392] StartCluster: {Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:02:15.343194    3758 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0816 10:02:15.356016    3758 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 10:02:15.363405    3758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 10:02:15.370635    3758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 10:02:15.377806    3758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 10:02:15.377819    3758 kubeadm.go:157] found existing configuration files:
	
	I0816 10:02:15.377855    3758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 10:02:15.384868    3758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 10:02:15.384903    3758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 10:02:15.392001    3758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 10:02:15.398935    3758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 10:02:15.398971    3758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 10:02:15.406181    3758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 10:02:15.413108    3758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 10:02:15.413142    3758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 10:02:15.422603    3758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 10:02:15.435692    3758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 10:02:15.435749    3758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 10:02:15.450920    3758 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 10:02:15.521417    3758 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 10:02:15.521501    3758 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 10:02:15.590843    3758 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 10:02:15.590923    3758 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 10:02:15.591008    3758 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 10:02:15.600874    3758 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 10:02:15.632377    3758 out.go:235]   - Generating certificates and keys ...
	I0816 10:02:15.632427    3758 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 10:02:15.632475    3758 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 10:02:15.791885    3758 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0816 10:02:15.852803    3758 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0816 10:02:15.961982    3758 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0816 10:02:16.146557    3758 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0816 10:02:16.493401    3758 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0816 10:02:16.493556    3758 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-286000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I0816 10:02:16.704832    3758 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0816 10:02:16.705108    3758 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-286000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I0816 10:02:16.922818    3758 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0816 10:02:17.082810    3758 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0816 10:02:17.393939    3758 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0816 10:02:17.394070    3758 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 10:02:17.465159    3758 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 10:02:17.731984    3758 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 10:02:18.019833    3758 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 10:02:18.164168    3758 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 10:02:18.273756    3758 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 10:02:18.274215    3758 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 10:02:18.275949    3758 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 10:02:18.300490    3758 out.go:235]   - Booting up control plane ...
	I0816 10:02:18.300569    3758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 10:02:18.300638    3758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 10:02:18.300693    3758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 10:02:18.300780    3758 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 10:02:18.300850    3758 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 10:02:18.300879    3758 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 10:02:18.405245    3758 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 10:02:18.405330    3758 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 10:02:18.909805    3758 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 504.760374ms
	I0816 10:02:18.909928    3758 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0816 10:02:24.844637    3758 kubeadm.go:310] [api-check] The API server is healthy after 5.938882642s
	I0816 10:02:24.854864    3758 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 10:02:24.862134    3758 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 10:02:24.890940    3758 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 10:02:24.891101    3758 kubeadm.go:310] [mark-control-plane] Marking the node ha-286000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 10:02:24.901441    3758 kubeadm.go:310] [bootstrap-token] Using token: 73merd.1elxantqs1p5mkiz
	I0816 10:02:24.939545    3758 out.go:235]   - Configuring RBAC rules ...
	I0816 10:02:24.939737    3758 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 10:02:24.944670    3758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 10:02:24.951393    3758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 10:02:24.953383    3758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 10:02:24.955899    3758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 10:02:24.958387    3758 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 10:02:25.251799    3758 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 10:02:25.676108    3758 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 10:02:26.249233    3758 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 10:02:26.249936    3758 kubeadm.go:310] 
	I0816 10:02:26.250001    3758 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 10:02:26.250014    3758 kubeadm.go:310] 
	I0816 10:02:26.250090    3758 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 10:02:26.250102    3758 kubeadm.go:310] 
	I0816 10:02:26.250122    3758 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 10:02:26.250175    3758 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 10:02:26.250221    3758 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 10:02:26.250229    3758 kubeadm.go:310] 
	I0816 10:02:26.250268    3758 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 10:02:26.250272    3758 kubeadm.go:310] 
	I0816 10:02:26.250305    3758 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 10:02:26.250313    3758 kubeadm.go:310] 
	I0816 10:02:26.250359    3758 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 10:02:26.250425    3758 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 10:02:26.250484    3758 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 10:02:26.250491    3758 kubeadm.go:310] 
	I0816 10:02:26.250569    3758 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 10:02:26.250648    3758 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 10:02:26.250656    3758 kubeadm.go:310] 
	I0816 10:02:26.250716    3758 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 73merd.1elxantqs1p5mkiz \
	I0816 10:02:26.250793    3758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:995590413ea712c4f7db2cf849d28264ebc291e6cd53d741ce1d22c1daaa1602 \
	I0816 10:02:26.250811    3758 kubeadm.go:310] 	--control-plane 
	I0816 10:02:26.250817    3758 kubeadm.go:310] 
	I0816 10:02:26.250879    3758 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 10:02:26.250885    3758 kubeadm.go:310] 
	I0816 10:02:26.250947    3758 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 73merd.1elxantqs1p5mkiz \
	I0816 10:02:26.251036    3758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:995590413ea712c4f7db2cf849d28264ebc291e6cd53d741ce1d22c1daaa1602 
	I0816 10:02:26.251297    3758 kubeadm.go:310] W0816 17:02:15.599099    1566 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 10:02:26.251521    3758 kubeadm.go:310] W0816 17:02:15.600578    1566 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 10:02:26.251608    3758 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 10:02:26.251620    3758 cni.go:84] Creating CNI manager for ""
	I0816 10:02:26.251624    3758 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0816 10:02:26.289324    3758 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 10:02:26.363318    3758 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0816 10:02:26.368971    3758 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0816 10:02:26.368983    3758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0816 10:02:26.382855    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0816 10:02:26.629869    3758 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 10:02:26.629947    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-286000 minikube.k8s.io/updated_at=2024_08_16T10_02_26_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd minikube.k8s.io/name=ha-286000 minikube.k8s.io/primary=true
	I0816 10:02:26.629949    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:26.658712    3758 ops.go:34] apiserver oom_adj: -16
	I0816 10:02:26.756402    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:27.257667    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:27.757259    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:28.256864    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:28.757602    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:29.256802    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:29.757282    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:29.882342    3758 kubeadm.go:1113] duration metric: took 3.252511045s to wait for elevateKubeSystemPrivileges
	I0816 10:02:29.882365    3758 kubeadm.go:394] duration metric: took 14.539506386s to StartCluster
	I0816 10:02:29.882380    3758 settings.go:142] acquiring lock: {Name:mk60e377244eac756871b74b6bd45115654652a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:29.882471    3758 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:02:29.882922    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/kubeconfig: {Name:mk7f4491c8ec5cf96c96a5a1a5dbfba4301394a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:29.883167    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 10:02:29.883170    3758 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:02:29.883182    3758 start.go:241] waiting for startup goroutines ...
	I0816 10:02:29.883204    3758 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 10:02:29.883238    3758 addons.go:69] Setting storage-provisioner=true in profile "ha-286000"
	I0816 10:02:29.883244    3758 addons.go:69] Setting default-storageclass=true in profile "ha-286000"
	I0816 10:02:29.883261    3758 addons.go:234] Setting addon storage-provisioner=true in "ha-286000"
	I0816 10:02:29.883278    3758 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-286000"
	I0816 10:02:29.883280    3758 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:02:29.883305    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:02:29.883558    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:29.883573    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:29.883575    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:29.883598    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:29.892969    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51028
	I0816 10:02:29.893238    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51030
	I0816 10:02:29.893356    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:29.893605    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:29.893730    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:29.893741    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:29.893938    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:29.893951    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:29.893976    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:29.894142    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:29.894254    3758 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:02:29.894338    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:29.894365    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:29.894397    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:29.894424    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:02:29.896690    3758 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:02:29.896968    3758 kapi.go:59] client config for ha-286000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key", CAFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xa202f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 10:02:29.897373    3758 cert_rotation.go:140] Starting client certificate rotation controller
	I0816 10:02:29.897530    3758 addons.go:234] Setting addon default-storageclass=true in "ha-286000"
	I0816 10:02:29.897550    3758 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:02:29.897771    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:29.897789    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:29.903669    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51032
	I0816 10:02:29.904077    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:29.904431    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:29.904451    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:29.904736    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:29.904889    3758 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:02:29.904993    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:29.905054    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:02:29.906086    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:29.906730    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51034
	I0816 10:02:29.907081    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:29.907463    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:29.907491    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:29.907694    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:29.908045    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:29.908061    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:29.917300    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51036
	I0816 10:02:29.917667    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:29.918020    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:29.918034    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:29.918277    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:29.918384    3758 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:02:29.918483    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:29.918572    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:02:29.919534    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:29.919671    3758 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 10:02:29.919680    3758 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 10:02:29.919688    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:29.919789    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:29.919896    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:29.919990    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:29.920072    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:29.928735    3758 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 10:02:29.965711    3758 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 10:02:29.965725    3758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 10:02:29.965744    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:29.965926    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:29.966043    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:29.966151    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:29.966280    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:30.033245    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 10:02:30.037035    3758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 10:02:30.090040    3758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 10:02:30.396267    3758 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
	I0816 10:02:30.396311    3758 main.go:141] libmachine: Making call to close driver server
	I0816 10:02:30.396323    3758 main.go:141] libmachine: (ha-286000) Calling .Close
	I0816 10:02:30.396482    3758 main.go:141] libmachine: Successfully made call to close driver server
	I0816 10:02:30.396488    3758 main.go:141] libmachine: (ha-286000) DBG | Closing plugin on server side
	I0816 10:02:30.396492    3758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 10:02:30.396501    3758 main.go:141] libmachine: Making call to close driver server
	I0816 10:02:30.396507    3758 main.go:141] libmachine: (ha-286000) Calling .Close
	I0816 10:02:30.396655    3758 main.go:141] libmachine: Successfully made call to close driver server
	I0816 10:02:30.396662    3758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 10:02:30.396713    3758 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0816 10:02:30.396725    3758 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0816 10:02:30.396804    3758 round_trippers.go:463] GET https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0816 10:02:30.396810    3758 round_trippers.go:469] Request Headers:
	I0816 10:02:30.396817    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:02:30.396822    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:02:30.402286    3758 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 10:02:30.402706    3758 round_trippers.go:463] PUT https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0816 10:02:30.402713    3758 round_trippers.go:469] Request Headers:
	I0816 10:02:30.402718    3758 round_trippers.go:473]     Content-Type: application/json
	I0816 10:02:30.402721    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:02:30.402723    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:02:30.404290    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:02:30.404403    3758 main.go:141] libmachine: Making call to close driver server
	I0816 10:02:30.404416    3758 main.go:141] libmachine: (ha-286000) Calling .Close
	I0816 10:02:30.404565    3758 main.go:141] libmachine: Successfully made call to close driver server
	I0816 10:02:30.404575    3758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 10:02:30.404586    3758 main.go:141] libmachine: (ha-286000) DBG | Closing plugin on server side
	I0816 10:02:30.497806    3758 main.go:141] libmachine: Making call to close driver server
	I0816 10:02:30.497818    3758 main.go:141] libmachine: (ha-286000) Calling .Close
	I0816 10:02:30.498006    3758 main.go:141] libmachine: Successfully made call to close driver server
	I0816 10:02:30.498011    3758 main.go:141] libmachine: (ha-286000) DBG | Closing plugin on server side
	I0816 10:02:30.498016    3758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 10:02:30.498023    3758 main.go:141] libmachine: Making call to close driver server
	I0816 10:02:30.498040    3758 main.go:141] libmachine: (ha-286000) Calling .Close
	I0816 10:02:30.498160    3758 main.go:141] libmachine: Successfully made call to close driver server
	I0816 10:02:30.498168    3758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 10:02:30.498179    3758 main.go:141] libmachine: (ha-286000) DBG | Closing plugin on server side
	I0816 10:02:30.538607    3758 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0816 10:02:30.596522    3758 addons.go:510] duration metric: took 713.336883ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0816 10:02:30.596578    3758 start.go:246] waiting for cluster config update ...
	I0816 10:02:30.596599    3758 start.go:255] writing updated cluster config ...
	I0816 10:02:30.634683    3758 out.go:201] 
	I0816 10:02:30.655754    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:02:30.655861    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:02:30.677634    3758 out.go:177] * Starting "ha-286000-m02" control-plane node in "ha-286000" cluster
	I0816 10:02:30.737443    3758 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:02:30.737475    3758 cache.go:56] Caching tarball of preloaded images
	I0816 10:02:30.737660    3758 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 10:02:30.737679    3758 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:02:30.737771    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:02:30.738414    3758 start.go:360] acquireMachinesLock for ha-286000-m02: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:02:30.738494    3758 start.go:364] duration metric: took 63.585µs to acquireMachinesLock for "ha-286000-m02"
	I0816 10:02:30.738517    3758 start.go:93] Provisioning new machine with config: &{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:02:30.738590    3758 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I0816 10:02:30.760743    3758 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 10:02:30.760905    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:30.760935    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:30.771028    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51041
	I0816 10:02:30.771383    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:30.771745    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:30.771760    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:30.771967    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:30.772077    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:02:30.772157    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:30.772261    3758 start.go:159] libmachine.API.Create for "ha-286000" (driver="hyperkit")
	I0816 10:02:30.772276    3758 client.go:168] LocalClient.Create starting
	I0816 10:02:30.772303    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem
	I0816 10:02:30.772358    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:02:30.772368    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:02:30.772413    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem
	I0816 10:02:30.772450    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:02:30.772460    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:02:30.772472    3758 main.go:141] libmachine: Running pre-create checks...
	I0816 10:02:30.772477    3758 main.go:141] libmachine: (ha-286000-m02) Calling .PreCreateCheck
	I0816 10:02:30.772545    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:30.772568    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetConfigRaw
	I0816 10:02:30.781772    3758 main.go:141] libmachine: Creating machine...
	I0816 10:02:30.781789    3758 main.go:141] libmachine: (ha-286000-m02) Calling .Create
	I0816 10:02:30.781943    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:30.782181    3758 main.go:141] libmachine: (ha-286000-m02) DBG | I0816 10:02:30.781934    3805 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:02:30.782269    3758 main.go:141] libmachine: (ha-286000-m02) Downloading /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1276/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 10:02:30.982082    3758 main.go:141] libmachine: (ha-286000-m02) DBG | I0816 10:02:30.982020    3805 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa...
	I0816 10:02:31.266609    3758 main.go:141] libmachine: (ha-286000-m02) DBG | I0816 10:02:31.266514    3805 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/ha-286000-m02.rawdisk...
	I0816 10:02:31.266629    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Writing magic tar header
	I0816 10:02:31.266638    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Writing SSH key tar header
	I0816 10:02:31.267382    3758 main.go:141] libmachine: (ha-286000-m02) DBG | I0816 10:02:31.267242    3805 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02 ...
	I0816 10:02:31.642864    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:31.642889    3758 main.go:141] libmachine: (ha-286000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/hyperkit.pid
	I0816 10:02:31.642912    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Using UUID f7630301-2bc3-4935-a751-ac72999da031
	I0816 10:02:31.669349    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Generated MAC 72:69:8f:11:68:1d
	I0816 10:02:31.669369    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000
	I0816 10:02:31.669397    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f7630301-2bc3-4935-a751-ac72999da031", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:02:31.669426    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f7630301-2bc3-4935-a751-ac72999da031", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:02:31.669455    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "f7630301-2bc3-4935-a751-ac72999da031", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/ha-286000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"}
	I0816 10:02:31.669484    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U f7630301-2bc3-4935-a751-ac72999da031 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/ha-286000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"
	I0816 10:02:31.669499    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 10:02:31.672492    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: Pid is 3806
	I0816 10:02:31.672957    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 0
	I0816 10:02:31.673000    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:31.673054    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:31.674014    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:31.674086    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:02:31.674098    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:02:31.674120    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:02:31.674129    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:02:31.674137    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:02:31.680025    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 10:02:31.688109    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 10:02:31.688968    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:02:31.688993    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:02:31.689006    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:02:31.689016    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:02:32.077133    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 10:02:32.077153    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 10:02:32.191789    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:02:32.191806    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:02:32.191814    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:02:32.191825    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:02:32.192676    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 10:02:32.192686    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 10:02:33.675165    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 1
	I0816 10:02:33.675183    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:33.675297    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:33.676089    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:33.676141    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:02:33.676153    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:02:33.676162    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:02:33.676169    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:02:33.676193    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:02:35.676266    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 2
	I0816 10:02:35.676285    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:35.676360    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:35.677182    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:35.677237    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:02:35.677248    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:02:35.677262    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:02:35.677270    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:02:35.677281    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:02:37.678515    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 3
	I0816 10:02:37.678532    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:37.678572    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:37.679405    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:37.679441    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:02:37.679451    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:02:37.679461    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:02:37.679468    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:02:37.679483    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:02:37.782496    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:37 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 10:02:37.782531    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:37 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 10:02:37.782540    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:37 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 10:02:37.806630    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:37 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 10:02:39.680161    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 4
	I0816 10:02:39.680178    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:39.680273    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:39.681064    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:39.681112    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:02:39.681136    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:02:39.681155    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:02:39.681170    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:02:39.681181    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:02:41.681380    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 5
	I0816 10:02:41.681414    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:41.681563    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:41.682343    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:41.682392    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:02:41.682406    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:02:41.682415    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found match: 72:69:8f:11:68:1d
	I0816 10:02:41.682421    3758 main.go:141] libmachine: (ha-286000-m02) DBG | IP: 192.169.0.6
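The "Attempt N" loop above is the hyperkit driver polling the macOS DHCP lease database every ~2s until a lease for the VM's generated MAC address (72:69:8f:11:68:1d) appears. A minimal sketch of that polling loop, assuming lease records flattened to the single-line form echoed in the log (the on-disk bootpd format differs, and this is illustrative, not the driver's actual parser):

```go
package main

import (
	"fmt"
	"os"
	"regexp"
	"time"
)

// findLeaseIP polls the lease file until a record with the given hardware
// address appears, mirroring the "Attempt N" loop in the log above.
// Assumes single-line records like:
//   {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ...}
func findLeaseIP(leaseFile, mac string, attempts int) (string, error) {
	re := regexp.MustCompile(`IPAddress:(\S+) HWAddress:` + regexp.QuoteMeta(mac))
	for i := 0; i < attempts; i++ {
		if data, err := os.ReadFile(leaseFile); err == nil {
			if m := re.FindSubmatch(data); m != nil {
				return string(m[1]), nil
			}
		}
		time.Sleep(2 * time.Second) // the log shows ~2s between attempts
	}
	return "", fmt.Errorf("no DHCP lease for %s after %d attempts", mac, attempts)
}

func main() {
	ip, err := findLeaseIP("/var/db/dhcpd_leases", "72:69:8f:11:68:1d", 30)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("IP:", ip) // the run above resolved 192.169.0.6
}
```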
	I0816 10:02:41.682475    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetConfigRaw
	I0816 10:02:41.683135    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:41.683257    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:41.683358    3758 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 10:02:41.683367    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetState
	I0816 10:02:41.683478    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:41.683537    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:41.684414    3758 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 10:02:41.684423    3758 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 10:02:41.684427    3758 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 10:02:41.684431    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:41.684566    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:41.684692    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:41.684809    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:41.684943    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:41.685095    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:41.685300    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:41.685308    3758 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 10:02:42.746559    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
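WaitForSSH is just a retry loop around the no-op command `exit 0`: the machine counts as reachable once sshd accepts a session and the command exits cleanly, which is what the `SSH cmd err, output: <nil>` line above records. A sketch of that probe, with the SSH transport abstracted behind a callback (a hypothetical signature, not libmachine's):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForSSH retries the no-op command "exit 0" until it succeeds, the same
// readiness probe the log shows. run stands in for an SSH session runner.
func waitForSSH(run func(cmd string) error, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := run("exit 0"); err == nil {
			return nil // sshd is up and accepting commands
		}
		time.Sleep(interval)
	}
	return errors.New("ssh not available before deadline")
}

func main() {
	calls := 0
	fake := func(string) error { // pretend sshd comes up on the third try
		calls++
		if calls < 3 {
			return errors.New("connection refused")
		}
		return nil
	}
	fmt.Println(waitForSSH(fake, 10*time.Second, 10*time.Millisecond))
}
```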
	I0816 10:02:42.746571    3758 main.go:141] libmachine: Detecting the provisioner...
	I0816 10:02:42.746577    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:42.746714    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:42.746824    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.746914    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.746988    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:42.747112    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:42.747269    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:42.747277    3758 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 10:02:42.811630    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 10:02:42.811666    3758 main.go:141] libmachine: found compatible host: buildroot
	I0816 10:02:42.811672    3758 main.go:141] libmachine: Provisioning with buildroot...
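Provisioner detection above is driven entirely by `cat /etc/os-release`: the `ID=buildroot` field is what selects the buildroot provisioner. A small sketch of parsing that output into key/value pairs:

```go
package main

import (
	"fmt"
	"strings"
)

// parseOSRelease turns /etc/os-release output (as catted above) into a map,
// stripping the optional quotes around values such as PRETTY_NAME.
func parseOSRelease(s string) map[string]string {
	out := map[string]string{}
	for _, line := range strings.Split(s, "\n") {
		k, v, ok := strings.Cut(strings.TrimSpace(line), "=")
		if !ok || k == "" {
			continue
		}
		out[k] = strings.Trim(v, `"`)
	}
	return out
}

func main() {
	osr := parseOSRelease("NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\n")
	fmt.Println(osr["ID"]) // "buildroot" -> picks the buildroot provisioner
}
```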
	I0816 10:02:42.811681    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:02:42.811816    3758 buildroot.go:166] provisioning hostname "ha-286000-m02"
	I0816 10:02:42.811828    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:02:42.811943    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:42.812045    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:42.812136    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.812274    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.812376    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:42.812513    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:42.812673    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:42.812682    3758 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-286000-m02 && echo "ha-286000-m02" | sudo tee /etc/hostname
	I0816 10:02:42.887531    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-286000-m02
	
	I0816 10:02:42.887545    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:42.887676    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:42.887769    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.887867    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.887953    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:42.888075    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:42.888214    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:42.888225    3758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-286000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-286000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-286000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 10:02:42.959625    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 10:02:42.959641    3758 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19461-1276/.minikube CaCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19461-1276/.minikube}
	I0816 10:02:42.959649    3758 buildroot.go:174] setting up certificates
	I0816 10:02:42.959661    3758 provision.go:84] configureAuth start
	I0816 10:02:42.959666    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:02:42.959803    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:02:42.959899    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:42.959980    3758 provision.go:143] copyHostCerts
	I0816 10:02:42.960007    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:02:42.960055    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem, removing ...
	I0816 10:02:42.960061    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:02:42.960954    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem (1078 bytes)
	I0816 10:02:42.961160    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:02:42.961190    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem, removing ...
	I0816 10:02:42.961195    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:02:42.961273    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem (1123 bytes)
	I0816 10:02:42.961439    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:02:42.961509    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem, removing ...
	I0816 10:02:42.961515    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:02:42.961594    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem (1679 bytes)
	I0816 10:02:42.961746    3758 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem org=jenkins.ha-286000-m02 san=[127.0.0.1 192.169.0.6 ha-286000-m02 localhost minikube]
	I0816 10:02:43.093511    3758 provision.go:177] copyRemoteCerts
	I0816 10:02:43.093556    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 10:02:43.093572    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:43.093719    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:43.093818    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.093917    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:43.094005    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:02:43.132229    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 10:02:43.132299    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 10:02:43.151814    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 10:02:43.151873    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 10:02:43.171464    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 10:02:43.171532    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 10:02:43.191045    3758 provision.go:87] duration metric: took 231.381757ms to configureAuth
	I0816 10:02:43.191057    3758 buildroot.go:189] setting minikube options for container-runtime
	I0816 10:02:43.191187    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:02:43.191200    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:43.191321    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:43.191410    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:43.191497    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.191580    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.191658    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:43.191759    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:43.191891    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:43.191898    3758 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 10:02:43.256733    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 10:02:43.256744    3758 buildroot.go:70] root file system type: tmpfs
	I0816 10:02:43.256823    3758 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 10:02:43.256835    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:43.256961    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:43.257048    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.257142    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.257220    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:43.257364    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:43.257505    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:43.257553    3758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 10:02:43.330709    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 10:02:43.330729    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:43.330874    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:43.330977    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.331073    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.331167    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:43.331293    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:43.331440    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:43.331451    3758 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 10:02:44.884634    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
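The install command above is a change-detection idiom: `diff` succeeds (so the `||` branch is skipped) when the rendered unit matches what is already on disk, and docker is only re-enabled and restarted when the unit actually changed. Here the diff fails because no unit exists yet, so the new file is moved into place and the service enabled. The empty `ExecStart=` line inside the unit is the standard systemd trick, explained by the unit's own comments, for clearing an inherited ExecStart before setting a new one. The same write-if-changed idiom in Go, as a sketch:

```go
package main

import (
	"bytes"
	"fmt"
	"os"
)

// writeIfChanged mirrors the `diff || { mv; daemon-reload; restart; }` idiom
// above: it reports whether the caller needs to reload and restart the service.
func writeIfChanged(path string, content []byte) (changed bool, err error) {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, content) {
		return false, nil // identical unit already installed: no restart needed
	}
	if err := os.WriteFile(path, content, 0o644); err != nil {
		return false, err
	}
	return true, nil
}

func main() {
	changed, err := writeIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
	fmt.Println(changed, err) // true on first write, false on an identical rerun
}
```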
	I0816 10:02:44.884648    3758 main.go:141] libmachine: Checking connection to Docker...
	I0816 10:02:44.884655    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetURL
	I0816 10:02:44.884793    3758 main.go:141] libmachine: Docker is up and running!
	I0816 10:02:44.884799    3758 main.go:141] libmachine: Reticulating splines...
	I0816 10:02:44.884804    3758 client.go:171] duration metric: took 14.112778669s to LocalClient.Create
	I0816 10:02:44.884820    3758 start.go:167] duration metric: took 14.112810851s to libmachine.API.Create "ha-286000"
	I0816 10:02:44.884826    3758 start.go:293] postStartSetup for "ha-286000-m02" (driver="hyperkit")
	I0816 10:02:44.884832    3758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 10:02:44.884843    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:44.884991    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 10:02:44.885004    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:44.885101    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:44.885204    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:44.885335    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:44.885428    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:02:44.925701    3758 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 10:02:44.929599    3758 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 10:02:44.929613    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/addons for local assets ...
	I0816 10:02:44.929722    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/files for local assets ...
	I0816 10:02:44.929908    3758 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> 18312.pem in /etc/ssl/certs
	I0816 10:02:44.929914    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /etc/ssl/certs/18312.pem
	I0816 10:02:44.930119    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 10:02:44.940484    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:02:44.973200    3758 start.go:296] duration metric: took 88.36731ms for postStartSetup
	I0816 10:02:44.973230    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetConfigRaw
	I0816 10:02:44.973848    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:02:44.973996    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:02:44.974351    3758 start.go:128] duration metric: took 14.23599979s to createHost
	I0816 10:02:44.974366    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:44.974448    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:44.974568    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:44.974660    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:44.974744    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:44.974844    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:44.974966    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:44.974973    3758 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 10:02:45.036267    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723827764.932365297
	
	I0816 10:02:45.036285    3758 fix.go:216] guest clock: 1723827764.932365297
	I0816 10:02:45.036291    3758 fix.go:229] Guest: 2024-08-16 10:02:44.932365297 -0700 PDT Remote: 2024-08-16 10:02:44.97436 -0700 PDT m=+56.899083401 (delta=-41.994703ms)
	I0816 10:02:45.036301    3758 fix.go:200] guest clock delta is within tolerance: -41.994703ms
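The guest-clock check above parses the guest's `date +%s.%N` output and compares it with the host clock; here the VM is about 42ms behind, within tolerance, so no clock adjustment is needed. A sketch of the comparison:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses `date +%s.%N` output from the guest and returns how far
// the guest clock is ahead (positive) or behind (negative) the host clock.
func clockDelta(guestOut string, hostNow time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(hostNow), nil
}

func main() {
	// Values from the log: guest 1723827764.932365297 vs host ...764.97436.
	host := time.Unix(0, int64(1723827764.97436*float64(time.Second)))
	d, _ := clockDelta("1723827764.932365297", host)
	fmt.Println(d) // ≈ -42ms, inside tolerance
}
```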
	I0816 10:02:45.036306    3758 start.go:83] releasing machines lock for "ha-286000-m02", held for 14.298063452s
	I0816 10:02:45.036338    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:45.036468    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:02:45.062334    3758 out.go:177] * Found network options:
	I0816 10:02:45.105996    3758 out.go:177]   - NO_PROXY=192.169.0.5
	W0816 10:02:45.128174    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:02:45.128216    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:45.128829    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:45.128969    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:45.129060    3758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 10:02:45.129088    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	W0816 10:02:45.129115    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:02:45.129222    3758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0816 10:02:45.129232    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:45.129238    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:45.129370    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:45.129393    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:45.129500    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:45.129515    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:45.129631    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:02:45.129647    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:45.129727    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	W0816 10:02:45.164265    3758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 10:02:45.164324    3758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 10:02:45.211468    3758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 10:02:45.211489    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:02:45.211595    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:02:45.228144    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 10:02:45.237310    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 10:02:45.246272    3758 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 10:02:45.246316    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 10:02:45.255987    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:02:45.265400    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 10:02:45.274661    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:02:45.283628    3758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 10:02:45.292648    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 10:02:45.302207    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 10:02:45.311314    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 10:02:45.320414    3758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 10:02:45.328532    3758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 10:02:45.337229    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:45.444954    3758 ssh_runner.go:195] Run: sudo systemctl restart containerd
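Cgroup-driver detection resolves to "cgroupfs" here, and the sed pipeline above flips `SystemdCgroup` in containerd's config.toml to match (kubelet and the container runtime must agree on the cgroup driver). The same targeted rewrite as a Go sketch, preserving indentation the way the sed's capture group does:

```go
package main

import (
	"fmt"
	"regexp"
)

// setSystemdCgroup mirrors the sed above: rewrite the SystemdCgroup line in
// containerd's config.toml in place, keeping the original indentation.
func setSystemdCgroup(conf string, enabled bool) string {
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	return re.ReplaceAllString(conf, fmt.Sprintf("${1}SystemdCgroup = %v", enabled))
}

func main() {
	in := "[plugins]\n    SystemdCgroup = true\n"
	fmt.Print(setSystemdCgroup(in, false)) // indentation kept, value flipped
}
```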
	I0816 10:02:45.464569    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:02:45.464652    3758 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 10:02:45.488292    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:02:45.500162    3758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 10:02:45.561835    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:02:45.572027    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:02:45.582122    3758 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 10:02:45.603011    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:02:45.613488    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:02:45.628434    3758 ssh_runner.go:195] Run: which cri-dockerd
	I0816 10:02:45.631358    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 10:02:45.638496    3758 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 10:02:45.652676    3758 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 10:02:45.755971    3758 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 10:02:45.860533    3758 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 10:02:45.860559    3758 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 10:02:45.874570    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:45.976095    3758 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:02:48.459358    3758 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.483288325s)
	I0816 10:02:48.459417    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0816 10:02:48.469882    3758 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0816 10:02:48.484429    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:02:48.495038    3758 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0816 10:02:48.597129    3758 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0816 10:02:48.691412    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:48.797592    3758 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0816 10:02:48.812075    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:02:48.823307    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:48.918652    3758 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0816 10:02:48.980191    3758 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0816 10:02:48.980257    3758 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0816 10:02:48.984954    3758 start.go:563] Will wait 60s for crictl version
	I0816 10:02:48.985011    3758 ssh_runner.go:195] Run: which crictl
	I0816 10:02:48.988185    3758 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 10:02:49.015262    3758 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0816 10:02:49.015344    3758 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:02:49.032341    3758 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:02:49.078128    3758 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0816 10:02:49.137645    3758 out.go:177]   - env NO_PROXY=192.169.0.5
	I0816 10:02:49.176661    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:02:49.177129    3758 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0816 10:02:49.181778    3758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:02:49.191522    3758 mustload.go:65] Loading cluster: ha-286000
	I0816 10:02:49.191665    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:02:49.191885    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:49.191909    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:49.200721    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51064
	I0816 10:02:49.201069    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:49.201413    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:49.201424    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:49.201648    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:49.201776    3758 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:02:49.201862    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:49.201960    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:02:49.202920    3758 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:02:49.203188    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:49.203205    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:49.211926    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51066
	I0816 10:02:49.212258    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:49.212635    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:49.212652    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:49.212860    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:49.212967    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:49.213056    3758 certs.go:68] Setting up /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000 for IP: 192.169.0.6
	I0816 10:02:49.213061    3758 certs.go:194] generating shared ca certs ...
	I0816 10:02:49.213071    3758 certs.go:226] acquiring lock for ca certs: {Name:mka549fbc467849834d2267da204028c49d88a3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:49.213229    3758 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key
	I0816 10:02:49.213305    3758 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key
	I0816 10:02:49.213314    3758 certs.go:256] generating profile certs ...
	I0816 10:02:49.213406    3758 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key
	I0816 10:02:49.213429    3758 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.5eb44438
	I0816 10:02:49.213443    3758 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.5eb44438 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I0816 10:02:49.263058    3758 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.5eb44438 ...
	I0816 10:02:49.263077    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.5eb44438: {Name:mk266d5e842df442c104f85113e06d171d5a89ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:49.263395    3758 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.5eb44438 ...
	I0816 10:02:49.263404    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.5eb44438: {Name:mk1428912a4c1edaafe55f9bff302c19ad337785 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:49.263623    3758 certs.go:381] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.5eb44438 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt
	I0816 10:02:49.263846    3758 certs.go:385] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.5eb44438 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key
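The apiserver cert generated above is what makes the HA topology work at the TLS layer: its IP SANs cover the in-cluster service IP (10.96.0.1, the first address of ServiceCIDR 10.96.0.0/12), loopback, both control-plane node IPs, and the kube-vip VIP 192.169.0.254, so the apiserver presents a valid certificate no matter which address a client dials. The SAN set from the log's `san=[...]` line, as it would appear in an x509 template (a sketch of the template fields only, not minikube's full signing flow):

```go
package main

import (
	"crypto/x509"
	"fmt"
	"net"
)

func main() {
	tmpl := x509.Certificate{
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"),     // first IP of ServiceCIDR 10.96.0.0/12
			net.ParseIP("127.0.0.1"),     // loopback
			net.ParseIP("10.0.0.1"),      // also in the log's SAN list
			net.ParseIP("192.169.0.5"),   // ha-286000
			net.ParseIP("192.169.0.6"),   // ha-286000-m02
			net.ParseIP("192.169.0.254"), // APIServerHAVIP (kube-vip)
		},
	}
	fmt.Println(len(tmpl.IPAddresses), "IP SANs")
}
```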
	I0816 10:02:49.264093    3758 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key
	I0816 10:02:49.264103    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 10:02:49.264125    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 10:02:49.264143    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 10:02:49.264163    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 10:02:49.264180    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 10:02:49.264199    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 10:02:49.264223    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 10:02:49.264242    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 10:02:49.264331    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem (1338 bytes)
	W0816 10:02:49.264378    3758 certs.go:480] ignoring /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831_empty.pem, impossibly tiny 0 bytes
	I0816 10:02:49.264386    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 10:02:49.264418    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem (1078 bytes)
	I0816 10:02:49.264449    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem (1123 bytes)
	I0816 10:02:49.264477    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem (1679 bytes)
	I0816 10:02:49.264545    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:02:49.264593    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /usr/share/ca-certificates/18312.pem
	I0816 10:02:49.264614    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:49.264634    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem -> /usr/share/ca-certificates/1831.pem
	I0816 10:02:49.264663    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:49.264814    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:49.264897    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:49.265014    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:49.265108    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:49.296192    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0816 10:02:49.299577    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0816 10:02:49.312765    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0816 10:02:49.316551    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0816 10:02:49.332167    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0816 10:02:49.336199    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0816 10:02:49.349166    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0816 10:02:49.354242    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0816 10:02:49.364119    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0816 10:02:49.367349    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0816 10:02:49.375423    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0816 10:02:49.378717    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0816 10:02:49.386918    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 10:02:49.408036    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 10:02:49.428163    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 10:02:49.448952    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 10:02:49.468986    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0816 10:02:49.489674    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 10:02:49.509597    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 10:02:49.530175    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 10:02:49.550578    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /usr/share/ca-certificates/18312.pem (1708 bytes)
	I0816 10:02:49.571266    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 10:02:49.590995    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem --> /usr/share/ca-certificates/1831.pem (1338 bytes)
	I0816 10:02:49.611766    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0816 10:02:49.625222    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0816 10:02:49.638852    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0816 10:02:49.653497    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0816 10:02:49.667098    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0816 10:02:49.680935    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0816 10:02:49.695437    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0816 10:02:49.708684    3758 ssh_runner.go:195] Run: openssl version
	I0816 10:02:49.712979    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18312.pem && ln -fs /usr/share/ca-certificates/18312.pem /etc/ssl/certs/18312.pem"
	I0816 10:02:49.721565    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18312.pem
	I0816 10:02:49.725513    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 16:57 /usr/share/ca-certificates/18312.pem
	I0816 10:02:49.725580    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18312.pem
	I0816 10:02:49.730364    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18312.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 10:02:49.739184    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 10:02:49.747494    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:49.750923    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:48 /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:49.750955    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:49.755195    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 10:02:49.763703    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1831.pem && ln -fs /usr/share/ca-certificates/1831.pem /etc/ssl/certs/1831.pem"
	I0816 10:02:49.772914    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1831.pem
	I0816 10:02:49.776377    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 16:57 /usr/share/ca-certificates/1831.pem
	I0816 10:02:49.776421    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1831.pem
	I0816 10:02:49.780731    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1831.pem /etc/ssl/certs/51391683.0"
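The openssl/ln pairs above implement OpenSSL's hashed CA directory layout: each certificate under /usr/share/ca-certificates is made reachable through a `<subject-hash>.0` symlink in /etc/ssl/certs (b5213941.0 is minikubeCA's hash), which is how TLS clients on the guest find the CA without rebuilding a bundle. The same two steps as a sketch:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// subjectHashLink mirrors the `openssl x509 -hash` + `ln -fs` pair above:
// compute the cert's subject hash, then create the <hash>.0 symlink that
// OpenSSL's CA lookup expects in a hashed certs directory.
func subjectHashLink(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // -f semantics: replace an existing link
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := subjectHashLink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	fmt.Println(link, err) // e.g. /etc/ssl/certs/b5213941.0 on the guest
}
```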
	I0816 10:02:49.789078    3758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 10:02:49.792242    3758 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 10:02:49.792277    3758 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.0 docker true true} ...
	I0816 10:02:49.792332    3758 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-286000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
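	The kubelet unit printed above is rendered per node: the version selects the binary path, and hostname-override/node-ip pin the kubelet to m02's identity (ha-286000-m02, 192.169.0.6). A sketch of how such a drop-in could be templated, assuming the field names below; this is an illustrative text/template, not minikube's actual template:

	package main

	import (
		"os"
		"text/template"
	)

	const unit = `[Unit]
	Wants=docker.socket

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(unit))
		// Values taken from the log above.
		_ = t.Execute(os.Stdout, struct {
			Version, Hostname, NodeIP string
		}{"v1.31.0", "ha-286000-m02", "192.169.0.6"})
	}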
	I0816 10:02:49.792349    3758 kube-vip.go:115] generating kube-vip config ...
	I0816 10:02:49.792381    3758 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 10:02:49.804469    3758 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 10:02:49.804511    3758 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable

	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
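	The static-pod manifest above runs kube-vip on each control-plane node with leader election (plndr-cp-lock lease), advertising the virtual IP 192.169.0.254 on port 8443 with control-plane load-balancing enabled. A quick way to probe whether the VIP is answering, sketched in Go; InsecureSkipVerify is used only because the apiserver's serving cert is not in the probing host's trust store, and the address is the one from the manifest:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // probe only
			},
		}
		resp, err := client.Get("https://192.169.0.254:8443/healthz")
		if err != nil {
			fmt.Println("VIP not answering:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("VIP healthz:", resp.Status)
	}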
	I0816 10:02:49.804567    3758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 10:02:49.812921    3758 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0816 10:02:49.812983    3758 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0816 10:02:49.821031    3758 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0816 10:02:49.821035    3758 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0816 10:02:49.821039    3758 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubectl
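	Each download URL above pairs the binary with a `?checksum=file:...sha256` companion, so the cached artifact is verified before being copied into the guest. A self-contained sketch of that verification, assuming the dl.k8s.io .sha256 files contain a bare hex digest; minikube's own download package handles this internally:

	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
		"strings"
	)

	func fetch(url string) ([]byte, error) {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		return io.ReadAll(resp.Body)
	}

	func main() {
		base := "https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl"
		bin, err := fetch(base)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		want, err := fetch(base + ".sha256")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		sum := sha256.Sum256(bin)
		if hex.EncodeToString(sum[:]) != strings.TrimSpace(string(want)) {
			fmt.Fprintln(os.Stderr, "checksum mismatch")
			os.Exit(1)
		}
		fmt.Println("kubectl checksum OK,", len(bin), "bytes")
	}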
	I0816 10:02:51.911279    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0816 10:02:51.911374    3758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0816 10:02:51.915115    3758 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0816 10:02:51.915150    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0816 10:02:52.537289    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0816 10:02:52.537383    3758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0816 10:02:52.540898    3758 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0816 10:02:52.540929    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0816 10:02:53.267467    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:02:53.278589    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0816 10:02:53.278731    3758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0816 10:02:53.282055    3758 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0816 10:02:53.282073    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0816 10:02:53.498714    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0816 10:02:53.506112    3758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0816 10:02:53.519672    3758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 10:02:53.533551    3758 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0816 10:02:53.548024    3758 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0816 10:02:53.551124    3758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:02:53.560470    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:53.659270    3758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:02:53.674625    3758 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:02:53.674900    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:53.674923    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:53.683891    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51093
	I0816 10:02:53.684256    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:53.684641    3758 main.go:141] libmachine: Using API Version 1
	I0816 10:02:53.684658    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:53.684885    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:53.685003    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:53.685084    3758 start.go:317] joinCluster: &{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:02:53.685155    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0816 10:02:53.685167    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:53.685248    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:53.685339    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:53.685423    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:53.685503    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:53.765911    3758 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:02:53.765972    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mpmdnf.6rzc0wpb01e0t6lz --discovery-token-ca-cert-hash sha256:995590413ea712c4f7db2cf849d28264ebc291e6cd53d741ce1d22c1daaa1602 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-286000-m02 --control-plane --apiserver-advertise-address=192.169.0.6 --apiserver-bind-port=8443"
	I0816 10:03:22.070391    3758 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mpmdnf.6rzc0wpb01e0t6lz --discovery-token-ca-cert-hash sha256:995590413ea712c4f7db2cf849d28264ebc291e6cd53d741ce1d22c1daaa1602 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-286000-m02 --control-plane --apiserver-advertise-address=192.169.0.6 --apiserver-bind-port=8443": (28.304928658s)
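	The join above is driven by a token minted on the existing control plane with `kubeadm token create --print-join-command --ttl=0` (the Run: line before it), whose output is then replayed on m02 with the extra --control-plane and --apiserver-advertise-address flags. A sketch of obtaining that join command, assuming it runs on (or via SSH to) a node where kubeadm and an admin kubeconfig are available; the PATH setup from the log is elided:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Same invocation as the Run: line above.
		cmd := exec.Command("sudo", "kubeadm", "token", "create",
			"--print-join-command", "--ttl=0")
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Fprintf(os.Stderr, "kubeadm: %v\n%s", err, out)
			os.Exit(1)
		}
		// Prints: kubeadm join <endpoint> --token ... --discovery-token-ca-cert-hash sha256:...
		fmt.Print(string(out))
	}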
	I0816 10:03:22.070414    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0816 10:03:22.427732    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-286000-m02 minikube.k8s.io/updated_at=2024_08_16T10_03_22_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd minikube.k8s.io/name=ha-286000 minikube.k8s.io/primary=false
	I0816 10:03:22.531211    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-286000-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0816 10:03:22.629050    3758 start.go:319] duration metric: took 28.944509117s to joinCluster
	I0816 10:03:22.629105    3758 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:03:22.629360    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:03:22.652598    3758 out.go:177] * Verifying Kubernetes components...
	I0816 10:03:22.726472    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:03:22.952731    3758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:03:22.975401    3758 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:03:22.975621    3758 kapi.go:59] client config for ha-286000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key", CAFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xa202f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0816 10:03:22.975663    3758 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0816 10:03:22.975836    3758 node_ready.go:35] waiting up to 6m0s for node "ha-286000-m02" to be "Ready" ...
	I0816 10:03:22.975886    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:22.975891    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:22.975897    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:22.975901    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:22.983180    3758 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0816 10:03:23.476314    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:23.476365    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:23.476380    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:23.476394    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:23.502874    3758 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0816 10:03:23.977290    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:23.977304    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:23.977311    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:23.977315    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:23.979495    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:24.477835    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:24.477851    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:24.477858    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:24.477861    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:24.480701    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:24.977514    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:24.977568    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:24.977580    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:24.977588    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:24.980856    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:24.981299    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:25.476235    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:25.476252    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:25.476260    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:25.476277    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:25.478567    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:25.976418    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:25.976469    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:25.976477    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:25.976480    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:25.978730    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:26.476423    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:26.476439    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:26.476445    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:26.476448    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:26.478424    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:26.975919    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:26.975934    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:26.975940    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:26.975945    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:26.977809    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:27.475966    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:27.475981    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:27.475987    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:27.475990    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:27.477769    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:27.478121    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:27.977123    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:27.977139    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:27.977146    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:27.977150    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:27.979266    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:28.477140    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:28.477226    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:28.477254    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:28.477264    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:28.480386    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:28.977368    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:28.977407    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:28.977420    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:28.977427    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:28.980479    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:29.477521    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:29.477545    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:29.477556    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:29.477565    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:29.481179    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:29.481510    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:29.975906    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:29.975932    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:29.975943    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:29.975949    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:29.978633    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:30.477171    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:30.477189    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:30.477198    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:30.477201    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:30.480005    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:30.977947    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:30.977973    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:30.977993    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:30.977998    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:30.982219    3758 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 10:03:31.476252    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:31.476327    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:31.476341    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:31.476347    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:31.479417    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:31.975987    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:31.976012    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:31.976023    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:31.976029    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:31.979409    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:31.979880    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:32.477180    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:32.477207    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:32.477229    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:32.477240    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:32.480686    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:32.976225    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:32.976250    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:32.976261    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:32.976269    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:32.979533    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:33.475956    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:33.475980    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:33.475991    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:33.475997    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:33.479025    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:33.977111    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:33.977185    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:33.977198    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:33.977204    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:33.980426    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:33.980922    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:34.476455    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:34.476470    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:34.476477    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:34.476481    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:34.478517    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:34.976996    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:34.977037    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:34.977049    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:34.977084    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:34.980500    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:35.476045    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:35.476068    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:35.476079    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:35.476086    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:35.479058    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:35.975920    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:35.975944    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:35.975956    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:35.975962    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:35.979071    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:36.477845    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:36.477872    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:36.477885    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:36.477946    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:36.481401    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:36.481890    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:36.977033    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:36.977057    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:36.977069    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:36.977076    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:36.980476    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:37.477095    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:37.477120    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:37.477133    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:37.477141    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:37.480485    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:37.976303    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:37.976320    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:37.976343    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:37.976348    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:37.978551    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:38.476492    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:38.476517    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.476528    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.476536    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.480190    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:38.976091    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:38.976114    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.976124    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.976129    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.979741    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:38.980051    3758 node_ready.go:49] node "ha-286000-m02" has status "Ready":"True"
	I0816 10:03:38.980066    3758 node_ready.go:38] duration metric: took 16.004516347s for node "ha-286000-m02" to be "Ready" ...
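	The GET loop above is a readiness poll: the node object is fetched every ~500ms until its NodeReady condition flips to True. The same wait expressed with client-go, as a minimal sketch; the kubeconfig path is illustrative, and the interval/timeout mirror the log's cadence and 6m budget rather than minikube's exact values:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, "ha-286000-m02", metav1.GetOptions{})
				if err != nil {
					return false, nil // keep polling on transient errors
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		fmt.Println("node Ready wait finished, err =", err)
	}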
	I0816 10:03:38.980077    3758 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 10:03:38.980127    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:03:38.980135    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.980142    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.980152    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.982937    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:38.987546    3758 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-2kqjf" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.987591    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-2kqjf
	I0816 10:03:38.987597    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.987603    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.987607    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.989339    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.989748    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:38.989758    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.989764    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.989768    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.991324    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.991660    3758 pod_ready.go:93] pod "coredns-6f6b679f8f-2kqjf" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:38.991671    3758 pod_ready.go:82] duration metric: took 4.111381ms for pod "coredns-6f6b679f8f-2kqjf" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.991678    3758 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-rfbz7" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.991708    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-rfbz7
	I0816 10:03:38.991713    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.991718    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.991721    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.993245    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.993624    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:38.993631    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.993640    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.993644    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.995095    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.995419    3758 pod_ready.go:93] pod "coredns-6f6b679f8f-rfbz7" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:38.995427    3758 pod_ready.go:82] duration metric: took 3.744646ms for pod "coredns-6f6b679f8f-rfbz7" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.995433    3758 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.995467    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-286000
	I0816 10:03:38.995472    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.995478    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.995482    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.997007    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.997390    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:38.997397    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.997403    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.997406    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.998839    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.999151    3758 pod_ready.go:93] pod "etcd-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:38.999160    3758 pod_ready.go:82] duration metric: took 3.721879ms for pod "etcd-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.999165    3758 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.999202    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-286000-m02
	I0816 10:03:38.999207    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.999213    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.999217    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.001189    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:39.001547    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:39.001554    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.001559    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.001563    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.003004    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:39.003290    3758 pod_ready.go:93] pod "etcd-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:39.003298    3758 pod_ready.go:82] duration metric: took 4.127582ms for pod "etcd-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.003307    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.177834    3758 request.go:632] Waited for 174.443582ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000
	I0816 10:03:39.177948    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000
	I0816 10:03:39.177963    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.177974    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.177981    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.181371    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:39.376438    3758 request.go:632] Waited for 194.538822ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:39.376562    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:39.376574    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.376586    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.376596    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.379989    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:39.380425    3758 pod_ready.go:93] pod "kube-apiserver-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:39.380437    3758 pod_ready.go:82] duration metric: took 377.131323ms for pod "kube-apiserver-ha-286000" in "kube-system" namespace to be "Ready" ...
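	The "Waited ... due to client-side throttling, not priority and fairness" lines here come from client-go's own rate limiter, which defaults to QPS 5 with burst 10; the back-to-back pod and node GETs exceed that, so requests queue for ~200ms. A sketch of raising those limits on the rest.Config before building a clientset; the kubeconfig path and values are illustrative:

	package main

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative
		if err != nil {
			panic(err)
		}
		cfg.QPS = 50    // client-go default is 5 requests/second
		cfg.Burst = 100 // client-go default is 10
		if _, err := kubernetes.NewForConfig(cfg); err != nil {
			panic(err)
		}
	}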
	I0816 10:03:39.380446    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.576633    3758 request.go:632] Waited for 196.095353ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000-m02
	I0816 10:03:39.576687    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000-m02
	I0816 10:03:39.576696    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.576769    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.576793    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.580164    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:39.776642    3758 request.go:632] Waited for 195.66937ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:39.776725    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:39.776735    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.776746    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.776752    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.780273    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:39.780714    3758 pod_ready.go:93] pod "kube-apiserver-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:39.780723    3758 pod_ready.go:82] duration metric: took 400.274102ms for pod "kube-apiserver-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.780730    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.977598    3758 request.go:632] Waited for 196.714211ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000
	I0816 10:03:39.977650    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000
	I0816 10:03:39.977659    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.977670    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.977679    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.981079    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.177460    3758 request.go:632] Waited for 195.94045ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:40.177528    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:40.177589    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:40.177603    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:40.177609    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:40.180971    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.181992    3758 pod_ready.go:93] pod "kube-controller-manager-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:40.182036    3758 pod_ready.go:82] duration metric: took 401.277819ms for pod "kube-controller-manager-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:40.182047    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:40.376258    3758 request.go:632] Waited for 194.111468ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000-m02
	I0816 10:03:40.376327    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000-m02
	I0816 10:03:40.376336    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:40.376347    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:40.376355    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:40.379476    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.576506    3758 request.go:632] Waited for 196.409077ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:40.576552    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:40.576560    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:40.576571    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:40.576580    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:40.579964    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.580641    3758 pod_ready.go:93] pod "kube-controller-manager-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:40.580654    3758 pod_ready.go:82] duration metric: took 398.607786ms for pod "kube-controller-manager-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:40.580663    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pt669" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:40.778194    3758 request.go:632] Waited for 197.469682ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pt669
	I0816 10:03:40.778324    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pt669
	I0816 10:03:40.778333    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:40.778345    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:40.778352    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:40.781925    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.976623    3758 request.go:632] Waited for 194.04689ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:40.976752    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:40.976761    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:40.976772    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:40.976780    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:40.980022    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.980473    3758 pod_ready.go:93] pod "kube-proxy-pt669" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:40.980485    3758 pod_ready.go:82] duration metric: took 399.822578ms for pod "kube-proxy-pt669" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:40.980494    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-w4nt2" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:41.177361    3758 request.go:632] Waited for 196.811255ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w4nt2
	I0816 10:03:41.177533    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w4nt2
	I0816 10:03:41.177545    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:41.177556    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:41.177563    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:41.181543    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:41.377692    3758 request.go:632] Waited for 195.583238ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:41.377791    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:41.377802    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:41.377812    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:41.377819    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:41.381134    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:41.381716    3758 pod_ready.go:93] pod "kube-proxy-w4nt2" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:41.381726    3758 pod_ready.go:82] duration metric: took 401.234529ms for pod "kube-proxy-w4nt2" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:41.381733    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:41.577020    3758 request.go:632] Waited for 195.246357ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000
	I0816 10:03:41.577073    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000
	I0816 10:03:41.577125    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:41.577138    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:41.577147    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:41.580051    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:41.777879    3758 request.go:632] Waited for 197.366145ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:41.777974    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:41.777983    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:41.777995    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:41.778003    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:41.781434    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:41.782012    3758 pod_ready.go:93] pod "kube-scheduler-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:41.782023    3758 pod_ready.go:82] duration metric: took 400.292624ms for pod "kube-scheduler-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:41.782051    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:41.976356    3758 request.go:632] Waited for 194.23341ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000-m02
	I0816 10:03:41.976463    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000-m02
	I0816 10:03:41.976473    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:41.976485    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:41.976494    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:41.980088    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:42.176952    3758 request.go:632] Waited for 196.161737ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:42.177009    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:42.177018    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.177026    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.177083    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.182888    3758 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 10:03:42.183179    3758 pod_ready.go:93] pod "kube-scheduler-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:42.183188    3758 pod_ready.go:82] duration metric: took 401.133714ms for pod "kube-scheduler-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:42.183203    3758 pod_ready.go:39] duration metric: took 3.203173842s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 10:03:42.183221    3758 api_server.go:52] waiting for apiserver process to appear ...
	I0816 10:03:42.183274    3758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:03:42.195671    3758 api_server.go:72] duration metric: took 19.566910589s to wait for apiserver process to appear ...
	I0816 10:03:42.195682    3758 api_server.go:88] waiting for apiserver healthz status ...
	I0816 10:03:42.195698    3758 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0816 10:03:42.198837    3758 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0816 10:03:42.198870    3758 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0816 10:03:42.198874    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.198880    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.198884    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.199389    3758 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0816 10:03:42.199468    3758 api_server.go:141] control plane version: v1.31.0
	I0816 10:03:42.199478    3758 api_server.go:131] duration metric: took 3.791893ms to wait for apiserver health ...
	I0816 10:03:42.199486    3758 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 10:03:42.378048    3758 request.go:632] Waited for 178.500938ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:03:42.378156    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:03:42.378166    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.378178    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.378186    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.382776    3758 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 10:03:42.386068    3758 system_pods.go:59] 17 kube-system pods found
	I0816 10:03:42.386083    3758 system_pods.go:61] "coredns-6f6b679f8f-2kqjf" [be18cd09-3e3b-4749-bf29-7001a879f593] Running
	I0816 10:03:42.386087    3758 system_pods.go:61] "coredns-6f6b679f8f-rfbz7" [a593e27a-e38b-46c7-a603-44963c31c095] Running
	I0816 10:03:42.386090    3758 system_pods.go:61] "etcd-ha-286000" [c45bc623-befe-4e88-8da1-bb4f7d7be5b2] Running
	I0816 10:03:42.386093    3758 system_pods.go:61] "etcd-ha-286000-m02" [e14fbe0c-4527-4cbb-8c13-01c0b32a8d15] Running
	I0816 10:03:42.386096    3758 system_pods.go:61] "kindnet-t9kjf" [20c57f28-5e86-44a4-8a2a-07e1e240abf2] Running
	I0816 10:03:42.386099    3758 system_pods.go:61] "kindnet-whqxb" [1cc5291b-52ea-44b5-b87c-607d46b5281a] Running
	I0816 10:03:42.386102    3758 system_pods.go:61] "kube-apiserver-ha-286000" [c0d298a9-931b-4084-9e5f-01a88de27456] Running
	I0816 10:03:42.386104    3758 system_pods.go:61] "kube-apiserver-ha-286000-m02" [3666c131-2b88-483a-ab8c-b18cb8db7d6f] Running
	I0816 10:03:42.386120    3758 system_pods.go:61] "kube-controller-manager-ha-286000" [5a4f4c5d-d89a-4e5f-b022-479ca31f64cd] Running
	I0816 10:03:42.386127    3758 system_pods.go:61] "kube-controller-manager-ha-286000-m02" [a347c3d4-fa47-49ea-93a0-83087017a2bd] Running
	I0816 10:03:42.386130    3758 system_pods.go:61] "kube-proxy-pt669" [798c956f-8f08-465a-b0d9-90b65f703f80] Running
	I0816 10:03:42.386139    3758 system_pods.go:61] "kube-proxy-w4nt2" [79ce2248-f8fd-4a3b-aeb7-f81d7ad64564] Running
	I0816 10:03:42.386143    3758 system_pods.go:61] "kube-scheduler-ha-286000" [b4af80e0-cbb0-4257-a212-3585faff0875] Running
	I0816 10:03:42.386145    3758 system_pods.go:61] "kube-scheduler-ha-286000-m02" [89920e93-c959-47ec-8a2c-f2b63e8c3d3a] Running
	I0816 10:03:42.386153    3758 system_pods.go:61] "kube-vip-ha-286000" [b0f85be2-2afb-44b0-a701-161b24529349] Running
	I0816 10:03:42.386156    3758 system_pods.go:61] "kube-vip-ha-286000-m02" [4982dc53-946d-4f75-8e9e-452ef39ff2fa] Running
	I0816 10:03:42.386159    3758 system_pods.go:61] "storage-provisioner" [4805d53b-2db3-4092-a3f2-d4a854e93adc] Running
	I0816 10:03:42.386163    3758 system_pods.go:74] duration metric: took 186.675963ms to wait for pod list to return data ...
	I0816 10:03:42.386170    3758 default_sa.go:34] waiting for default service account to be created ...
	I0816 10:03:42.577030    3758 request.go:632] Waited for 190.795237ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0816 10:03:42.577183    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0816 10:03:42.577198    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.577209    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.577215    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.581020    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:42.581204    3758 default_sa.go:45] found service account: "default"
	I0816 10:03:42.581216    3758 default_sa.go:55] duration metric: took 195.045213ms for default service account to be created ...
	I0816 10:03:42.581224    3758 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 10:03:42.777160    3758 request.go:632] Waited for 195.848894ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:03:42.777252    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:03:42.777268    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.777285    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.777303    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.781637    3758 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 10:03:42.784962    3758 system_pods.go:86] 17 kube-system pods found
	I0816 10:03:42.784974    3758 system_pods.go:89] "coredns-6f6b679f8f-2kqjf" [be18cd09-3e3b-4749-bf29-7001a879f593] Running
	I0816 10:03:42.784978    3758 system_pods.go:89] "coredns-6f6b679f8f-rfbz7" [a593e27a-e38b-46c7-a603-44963c31c095] Running
	I0816 10:03:42.784981    3758 system_pods.go:89] "etcd-ha-286000" [c45bc623-befe-4e88-8da1-bb4f7d7be5b2] Running
	I0816 10:03:42.784984    3758 system_pods.go:89] "etcd-ha-286000-m02" [e14fbe0c-4527-4cbb-8c13-01c0b32a8d15] Running
	I0816 10:03:42.784987    3758 system_pods.go:89] "kindnet-t9kjf" [20c57f28-5e86-44a4-8a2a-07e1e240abf2] Running
	I0816 10:03:42.784990    3758 system_pods.go:89] "kindnet-whqxb" [1cc5291b-52ea-44b5-b87c-607d46b5281a] Running
	I0816 10:03:42.784992    3758 system_pods.go:89] "kube-apiserver-ha-286000" [c0d298a9-931b-4084-9e5f-01a88de27456] Running
	I0816 10:03:42.784995    3758 system_pods.go:89] "kube-apiserver-ha-286000-m02" [3666c131-2b88-483a-ab8c-b18cb8db7d6f] Running
	I0816 10:03:42.784997    3758 system_pods.go:89] "kube-controller-manager-ha-286000" [5a4f4c5d-d89a-4e5f-b022-479ca31f64cd] Running
	I0816 10:03:42.785001    3758 system_pods.go:89] "kube-controller-manager-ha-286000-m02" [a347c3d4-fa47-49ea-93a0-83087017a2bd] Running
	I0816 10:03:42.785003    3758 system_pods.go:89] "kube-proxy-pt669" [798c956f-8f08-465a-b0d9-90b65f703f80] Running
	I0816 10:03:42.785006    3758 system_pods.go:89] "kube-proxy-w4nt2" [79ce2248-f8fd-4a3b-aeb7-f81d7ad64564] Running
	I0816 10:03:42.785009    3758 system_pods.go:89] "kube-scheduler-ha-286000" [b4af80e0-cbb0-4257-a212-3585faff0875] Running
	I0816 10:03:42.785011    3758 system_pods.go:89] "kube-scheduler-ha-286000-m02" [89920e93-c959-47ec-8a2c-f2b63e8c3d3a] Running
	I0816 10:03:42.785015    3758 system_pods.go:89] "kube-vip-ha-286000" [b0f85be2-2afb-44b0-a701-161b24529349] Running
	I0816 10:03:42.785017    3758 system_pods.go:89] "kube-vip-ha-286000-m02" [4982dc53-946d-4f75-8e9e-452ef39ff2fa] Running
	I0816 10:03:42.785023    3758 system_pods.go:89] "storage-provisioner" [4805d53b-2db3-4092-a3f2-d4a854e93adc] Running
	I0816 10:03:42.785028    3758 system_pods.go:126] duration metric: took 203.80313ms to wait for k8s-apps to be running ...
	I0816 10:03:42.785034    3758 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 10:03:42.785088    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:03:42.796381    3758 system_svc.go:56] duration metric: took 11.343439ms WaitForService to wait for kubelet
	I0816 10:03:42.796397    3758 kubeadm.go:582] duration metric: took 20.16764951s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:03:42.796409    3758 node_conditions.go:102] verifying NodePressure condition ...
	I0816 10:03:42.977594    3758 request.go:632] Waited for 181.143005ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0816 10:03:42.977695    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0816 10:03:42.977706    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.977719    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.977724    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.981373    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:42.981988    3758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 10:03:42.982013    3758 node_conditions.go:123] node cpu capacity is 2
	I0816 10:03:42.982039    3758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 10:03:42.982043    3758 node_conditions.go:123] node cpu capacity is 2
	I0816 10:03:42.982046    3758 node_conditions.go:105] duration metric: took 185.637747ms to run NodePressure ...
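
The NodePressure verification boils down to listing nodes and reading capacity from their status; the two capacity/cpu pairs above come from the two nodes currently in the cluster. A client-go sketch of reading the same fields (kubeconfig path is a placeholder):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
    	}
    }
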
	I0816 10:03:42.982055    3758 start.go:241] waiting for startup goroutines ...
	I0816 10:03:42.982079    3758 start.go:255] writing updated cluster config ...
	I0816 10:03:43.003740    3758 out.go:201] 
	I0816 10:03:43.024885    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:03:43.024976    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:03:43.047776    3758 out.go:177] * Starting "ha-286000-m03" control-plane node in "ha-286000" cluster
	I0816 10:03:43.137621    3758 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:03:43.137655    3758 cache.go:56] Caching tarball of preloaded images
	I0816 10:03:43.137861    3758 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 10:03:43.137884    3758 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:03:43.138022    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:03:43.159475    3758 start.go:360] acquireMachinesLock for ha-286000-m03: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:03:43.159592    3758 start.go:364] duration metric: took 91.442µs to acquireMachinesLock for "ha-286000-m03"
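
acquireMachinesLock serializes machine creation so concurrent starts cannot race on the same store; here it was uncontended, hence the microsecond acquisition. minikube's actual lock implementation differs; the sketch below only illustrates the general pattern with an advisory flock(2), using the 500ms retry delay and 13m timeout shown in the log (the lock file path is hypothetical):

    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"golang.org/x/sys/unix"
    )

    // acquire takes an exclusive advisory lock, retrying every 500ms until timeout.
    func acquire(path string, timeout time.Duration) (*os.File, error) {
    	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
    	if err != nil {
    		return nil, err
    	}
    	deadline := time.Now().Add(timeout)
    	for {
    		if err := unix.Flock(int(f.Fd()), unix.LOCK_EX|unix.LOCK_NB); err == nil {
    			return f, nil // held until f is closed
    		}
    		if time.Now().After(deadline) {
    			f.Close()
    			return nil, fmt.Errorf("timed out waiting for %s", path)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }

    func main() {
    	f, err := acquire("/tmp/machines.lock", 13*time.Minute) // hypothetical path
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	fmt.Println("lock held; safe to provision")
    }
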
	I0816 10:03:43.159625    3758 start.go:93] Provisioning new machine with config: &{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:03:43.159736    3758 start.go:125] createHost starting for "m03" (driver="hyperkit")
	I0816 10:03:43.180569    3758 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 10:03:43.180719    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:03:43.180762    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:03:43.191087    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51098
	I0816 10:03:43.191558    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:03:43.191889    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:03:43.191899    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:03:43.192106    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:03:43.192302    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:03:43.192394    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:43.192506    3758 start.go:159] libmachine.API.Create for "ha-286000" (driver="hyperkit")
	I0816 10:03:43.192526    3758 client.go:168] LocalClient.Create starting
	I0816 10:03:43.192559    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem
	I0816 10:03:43.192606    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:03:43.192620    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:03:43.192675    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem
	I0816 10:03:43.192704    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:03:43.192714    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:03:43.192726    3758 main.go:141] libmachine: Running pre-create checks...
	I0816 10:03:43.192731    3758 main.go:141] libmachine: (ha-286000-m03) Calling .PreCreateCheck
	I0816 10:03:43.192813    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:43.192833    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetConfigRaw
	I0816 10:03:43.202038    3758 main.go:141] libmachine: Creating machine...
	I0816 10:03:43.202062    3758 main.go:141] libmachine: (ha-286000-m03) Calling .Create
	I0816 10:03:43.202302    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:43.202574    3758 main.go:141] libmachine: (ha-286000-m03) DBG | I0816 10:03:43.202284    3848 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:03:43.202705    3758 main.go:141] libmachine: (ha-286000-m03) Downloading /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1276/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 10:03:43.529965    3758 main.go:141] libmachine: (ha-286000-m03) DBG | I0816 10:03:43.529902    3848 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa...
	I0816 10:03:43.631057    3758 main.go:141] libmachine: (ha-286000-m03) DBG | I0816 10:03:43.630965    3848 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/ha-286000-m03.rawdisk...
	I0816 10:03:43.631074    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Writing magic tar header
	I0816 10:03:43.631083    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Writing SSH key tar header
	I0816 10:03:43.631711    3758 main.go:141] libmachine: (ha-286000-m03) DBG | I0816 10:03:43.631681    3848 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03 ...
	I0816 10:03:44.155270    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:44.155286    3758 main.go:141] libmachine: (ha-286000-m03) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/hyperkit.pid
	I0816 10:03:44.155318    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Using UUID 408efb18-ff91-428f-b414-6eb0d982a779
	I0816 10:03:44.181981    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Generated MAC 8a:e:de:5b:b5:8b
	I0816 10:03:44.182010    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000
	I0816 10:03:44.182059    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"408efb18-ff91-428f-b414-6eb0d982a779", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:03:44.182101    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"408efb18-ff91-428f-b414-6eb0d982a779", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:03:44.182228    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "408efb18-ff91-428f-b414-6eb0d982a779", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/ha-286000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"}
	I0816 10:03:44.182306    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 408efb18-ff91-428f-b414-6eb0d982a779 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/ha-286000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"
	I0816 10:03:44.182332    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 10:03:44.185479    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: Pid is 3849
	I0816 10:03:44.185900    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 0
	I0816 10:03:44.185912    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:44.186025    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:44.186994    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:44.187062    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:03:44.187079    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:03:44.187114    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:03:44.187142    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:03:44.187168    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:03:44.187185    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:03:44.193352    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 10:03:44.201781    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 10:03:44.202595    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:03:44.202610    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:03:44.202637    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:03:44.202653    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:03:44.588441    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 10:03:44.588458    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 10:03:44.703100    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:03:44.703117    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:03:44.703125    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:03:44.703135    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:03:44.703952    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 10:03:44.703963    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 10:03:46.188345    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 1
	I0816 10:03:46.188360    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:46.188480    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:46.189255    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:46.189306    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:03:46.189321    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:03:46.189335    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:03:46.189352    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:03:46.189359    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:03:46.189385    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:03:48.190823    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 2
	I0816 10:03:48.190838    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:48.190916    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:48.191692    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:48.191747    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:03:48.191757    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:03:48.191779    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:03:48.191787    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:03:48.191803    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:03:48.191815    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:03:50.191897    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 3
	I0816 10:03:50.191913    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:50.191977    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:50.192744    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:50.192793    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:03:50.192802    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:03:50.192811    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:03:50.192820    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:03:50.192836    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:03:50.192850    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:03:50.310705    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:50 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 10:03:50.310770    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:50 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 10:03:50.310783    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:50 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 10:03:50.334054    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:50 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 10:03:52.193443    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 4
	I0816 10:03:52.193459    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:52.193535    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:52.194319    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:52.194367    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:03:52.194379    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:03:52.194390    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:03:52.194399    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:03:52.194408    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:03:52.194419    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:03:54.195434    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 5
	I0816 10:03:54.195456    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:54.195540    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:54.196406    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:54.196510    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 6 entries in /var/db/dhcpd_leases!
	I0816 10:03:54.196530    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66c0d7f9}
	I0816 10:03:54.196551    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found match: 8a:e:de:5b:b5:8b
	I0816 10:03:54.196553    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetConfigRaw
	I0816 10:03:54.196565    3758 main.go:141] libmachine: (ha-286000-m03) DBG | IP: 192.169.0.7
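
The attempts above poll the macOS vmnet DHCP lease file until an entry for the new VM's generated MAC appears; that is how the driver learns the guest IP. A simplified scan of that file (it stores key=value pairs grouped in {...} blocks, with ip_address listed before hw_address; this sketch leans on that ordering and is not the driver's actual parser):

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // lookupIP returns the leased IP for the given MAC, if present.
    func lookupIP(mac string) (string, error) {
    	f, err := os.Open("/var/db/dhcpd_leases")
    	if err != nil {
    		return "", err
    	}
    	defer f.Close()
    	var ip string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		if strings.HasPrefix(line, "ip_address=") {
    			ip = strings.TrimPrefix(line, "ip_address=")
    		}
    		// hw_address lines look like "hw_address=1,8a:e:de:5b:b5:8b"
    		if strings.HasPrefix(line, "hw_address=") && strings.Contains(line, mac) {
    			return ip, nil
    		}
    	}
    	return "", fmt.Errorf("no lease found for %s", mac)
    }

    func main() {
    	fmt.Println(lookupIP("8a:e:de:5b:b5:8b"))
    }
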
	I0816 10:03:54.197236    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:54.197351    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:54.197441    3758 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 10:03:54.197454    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetState
	I0816 10:03:54.197541    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:54.197611    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:54.198439    3758 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 10:03:54.198446    3758 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 10:03:54.198450    3758 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 10:03:54.198454    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:54.198532    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:54.198612    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:54.198695    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:54.198783    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:54.198892    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:54.199063    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:54.199071    3758 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 10:03:55.266165    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
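
"Waiting for SSH" above simply retries running `exit 0` over SSH until a session succeeds; the nil err/output line marks the first success. A sketch of that probe with golang.org/x/crypto/ssh (the key path is a placeholder, and host key checking is disabled only because the VM and its key were just created):

    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	keyBytes, err := os.ReadFile("/path/to/machines/ha-286000-m03/id_rsa") // placeholder
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fresh VM, no known_hosts entry yet
    		Timeout:         5 * time.Second,
    	}
    	for {
    		client, err := ssh.Dial("tcp", "192.169.0.7:22", cfg)
    		if err == nil {
    			sess, serr := client.NewSession()
    			if serr == nil {
    				serr = sess.Run("exit 0")
    				sess.Close()
    			}
    			client.Close()
    			if serr == nil {
    				fmt.Println("SSH is available")
    				return
    			}
    		}
    		time.Sleep(time.Second)
    	}
    }
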
	I0816 10:03:55.266179    3758 main.go:141] libmachine: Detecting the provisioner...
	I0816 10:03:55.266185    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.266318    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.266413    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.266507    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.266601    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.266731    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.266879    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.266886    3758 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 10:03:55.332778    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 10:03:55.332822    3758 main.go:141] libmachine: found compatible host: buildroot
	I0816 10:03:55.332828    3758 main.go:141] libmachine: Provisioning with buildroot...
	I0816 10:03:55.332834    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:03:55.332971    3758 buildroot.go:166] provisioning hostname "ha-286000-m03"
	I0816 10:03:55.332982    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:03:55.333079    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.333186    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.333277    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.333375    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.333463    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.333593    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.333737    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.333746    3758 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-286000-m03 && echo "ha-286000-m03" | sudo tee /etc/hostname
	I0816 10:03:55.411701    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-286000-m03
	
	I0816 10:03:55.411715    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.411851    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.411965    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.412057    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.412140    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.412274    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.412418    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.412429    3758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-286000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-286000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-286000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 10:03:55.485102    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 10:03:55.485118    3758 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19461-1276/.minikube CaCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19461-1276/.minikube}
	I0816 10:03:55.485127    3758 buildroot.go:174] setting up certificates
	I0816 10:03:55.485134    3758 provision.go:84] configureAuth start
	I0816 10:03:55.485141    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:03:55.485284    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:03:55.485375    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.485445    3758 provision.go:143] copyHostCerts
	I0816 10:03:55.485474    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:03:55.485527    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem, removing ...
	I0816 10:03:55.485533    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:03:55.485654    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem (1123 bytes)
	I0816 10:03:55.485864    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:03:55.485895    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem, removing ...
	I0816 10:03:55.485900    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:03:55.486003    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem (1679 bytes)
	I0816 10:03:55.486172    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:03:55.486206    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem, removing ...
	I0816 10:03:55.486215    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:03:55.486286    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem (1078 bytes)
	I0816 10:03:55.486435    3758 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem org=jenkins.ha-286000-m03 san=[127.0.0.1 192.169.0.7 ha-286000-m03 localhost minikube]
	I0816 10:03:55.572803    3758 provision.go:177] copyRemoteCerts
	I0816 10:03:55.572855    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 10:03:55.572869    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.573015    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.573114    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.573208    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.573304    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:03:55.612104    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 10:03:55.612186    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 10:03:55.632484    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 10:03:55.632548    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 10:03:55.652677    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 10:03:55.652752    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 10:03:55.672555    3758 provision.go:87] duration metric: took 187.4165ms to configureAuth
	I0816 10:03:55.672568    3758 buildroot.go:189] setting minikube options for container-runtime
	I0816 10:03:55.672735    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:03:55.672748    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:55.672889    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.672982    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.673071    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.673157    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.673245    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.673371    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.673496    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.673504    3758 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 10:03:55.738053    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 10:03:55.738064    3758 buildroot.go:70] root file system type: tmpfs
	I0816 10:03:55.738156    3758 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 10:03:55.738167    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.738306    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.738397    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.738489    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.738573    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.738694    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.738841    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.738890    3758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 10:03:55.813774    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 10:03:55.813790    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.813924    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.814019    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.814103    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.814193    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.814320    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.814470    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.814484    3758 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 10:03:57.356529    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0816 10:03:57.356544    3758 main.go:141] libmachine: Checking connection to Docker...
	I0816 10:03:57.356549    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetURL
	I0816 10:03:57.356692    3758 main.go:141] libmachine: Docker is up and running!
	I0816 10:03:57.356700    3758 main.go:141] libmachine: Reticulating splines...
	I0816 10:03:57.356705    3758 client.go:171] duration metric: took 14.164443579s to LocalClient.Create
	I0816 10:03:57.356717    3758 start.go:167] duration metric: took 14.164482312s to libmachine.API.Create "ha-286000"
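
Before the connection check, the docker unit was installed by the diff/mv one-liner above: compare the rendered docker.service.new against the current unit, and only on a difference move it into place and daemon-reload/enable/restart. The "can't stat" output just means this fresh VM had no unit yet, so the first install always runs. The same compare-then-replace idea in Go (paths as in the log; a generic sketch, not minikube's code, and it needs root to touch systemd):

    package main

    import (
    	"bytes"
    	"os"
    	"os/exec"
    )

    func main() {
    	const cur = "/lib/systemd/system/docker.service"
    	const next = "/lib/systemd/system/docker.service.new"

    	old, _ := os.ReadFile(cur) // a missing unit reads as empty and always differs
    	neu, err := os.ReadFile(next)
    	if err != nil {
    		panic(err)
    	}
    	if bytes.Equal(old, neu) {
    		return // unit unchanged; skip the restart
    	}
    	if err := os.Rename(next, cur); err != nil {
    		panic(err)
    	}
    	for _, args := range [][]string{
    		{"systemctl", "daemon-reload"},
    		{"systemctl", "enable", "docker"},
    		{"systemctl", "restart", "docker"},
    	} {
    		if err := exec.Command(args[0], args[1:]...).Run(); err != nil {
    			panic(err)
    		}
    	}
    }
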
	I0816 10:03:57.356727    3758 start.go:293] postStartSetup for "ha-286000-m03" (driver="hyperkit")
	I0816 10:03:57.356734    3758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 10:03:57.356744    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:57.356919    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 10:03:57.356935    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:57.357030    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:57.357130    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:57.357273    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:57.357390    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:03:57.398231    3758 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 10:03:57.402472    3758 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 10:03:57.402483    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/addons for local assets ...
	I0816 10:03:57.402571    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/files for local assets ...
	I0816 10:03:57.402723    3758 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> 18312.pem in /etc/ssl/certs
	I0816 10:03:57.402729    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /etc/ssl/certs/18312.pem
	I0816 10:03:57.402895    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 10:03:57.412226    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:03:57.443242    3758 start.go:296] duration metric: took 86.507779ms for postStartSetup
	I0816 10:03:57.443268    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetConfigRaw
	I0816 10:03:57.443871    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:03:57.444031    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:03:57.444386    3758 start.go:128] duration metric: took 14.284913269s to createHost
	I0816 10:03:57.444401    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:57.444490    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:57.444568    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:57.444658    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:57.444732    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:57.444842    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:57.444963    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:57.444971    3758 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 10:03:57.509634    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723827837.586061636
	
	I0816 10:03:57.509648    3758 fix.go:216] guest clock: 1723827837.586061636
	I0816 10:03:57.509653    3758 fix.go:229] Guest: 2024-08-16 10:03:57.586061636 -0700 PDT Remote: 2024-08-16 10:03:57.444395 -0700 PDT m=+129.370490857 (delta=141.666636ms)
	I0816 10:03:57.509665    3758 fix.go:200] guest clock delta is within tolerance: 141.666636ms
	I0816 10:03:57.509669    3758 start.go:83] releasing machines lock for "ha-286000-m03", held for 14.350339851s
	I0816 10:03:57.509691    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:57.509832    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:03:57.542277    3758 out.go:177] * Found network options:
	I0816 10:03:57.562266    3758 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0816 10:03:57.583040    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	W0816 10:03:57.583073    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:03:57.583092    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:57.583964    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:57.584310    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:57.584469    3758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 10:03:57.584522    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	W0816 10:03:57.584570    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	W0816 10:03:57.584596    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:03:57.584708    3758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0816 10:03:57.584735    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:57.584813    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:57.584995    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:57.585023    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:57.585164    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:57.585195    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:57.585338    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:03:57.585406    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:57.585566    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	W0816 10:03:57.623481    3758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 10:03:57.623541    3758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 10:03:57.670278    3758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 10:03:57.670298    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:03:57.670408    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:03:57.686134    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 10:03:57.694681    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 10:03:57.703134    3758 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 10:03:57.703198    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 10:03:57.711736    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:03:57.720041    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 10:03:57.728421    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:03:57.737252    3758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 10:03:57.746356    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 10:03:57.755117    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 10:03:57.763962    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 10:03:57.772639    3758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 10:03:57.780684    3758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 10:03:57.788938    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:03:57.893932    3758 ssh_runner.go:195] Run: sudo systemctl restart containerd
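	For readability, the containerd reconfiguration that minikube just streamed over SSH (the ssh_runner lines above) boils down to the following shell sequence. This is a consolidated sketch of commands already shown verbatim in the log, not an extra step in the run:
	
		# Point crictl at containerd's socket (replaced later once cri-dockerd is chosen).
		sudo mkdir -p /etc
		printf '%s\n' 'runtime-endpoint: unix:///run/containerd/containerd.sock' | sudo tee /etc/crictl.yaml
		# Force the "cgroupfs" cgroup driver and the runc v2 shim in containerd's config.
		sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
		sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml
		# Enable IPv4 forwarding for pod networking, then restart containerd.
		sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
		sudo systemctl daemon-reload
		sudo systemctl restart containerd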
	I0816 10:03:57.913150    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:03:57.913224    3758 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 10:03:57.932274    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:03:57.945000    3758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 10:03:57.965222    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:03:57.977460    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:03:57.988259    3758 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 10:03:58.006552    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:03:58.017261    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:03:58.032545    3758 ssh_runner.go:195] Run: which cri-dockerd
	I0816 10:03:58.035464    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 10:03:58.042742    3758 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 10:03:58.056747    3758 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 10:03:58.163471    3758 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 10:03:58.281020    3758 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 10:03:58.281044    3758 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 10:03:58.294992    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:03:58.399122    3758 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:04:59.415916    3758 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.017935847s)
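	The docker reconfiguration mirrors the containerd one: crictl is repointed at cri-dockerd's socket and a 130-byte /etc/docker/daemon.json is copied over to select the "cgroupfs" cgroup driver (docker.go:574 above). The exact payload is not printed in the log; a plausible reconstruction matching the logged intent, written as a shell heredoc, would be (hypothetical, not the verbatim file):
	
		# Hypothetical reconstruction; the verbatim 130-byte payload is not in the log.
		sudo tee /etc/docker/daemon.json <<'EOF'
		{
		  "exec-opts": ["native.cgroupdriver=cgroupfs"],
		  "log-driver": "json-file",
		  "log-opts": {"max-size": "100m"},
		  "storage-driver": "overlay2"
		}
		EOF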
	I0816 10:04:59.415981    3758 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0816 10:04:59.453653    3758 out.go:201] 
	W0816 10:04:59.474040    3758 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 16 17:03:56 ha-286000-m03 systemd[1]: Starting Docker Application Container Engine...
	Aug 16 17:03:56 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:56.200976866Z" level=info msg="Starting up"
	Aug 16 17:03:56 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:56.201455258Z" level=info msg="containerd not running, starting managed containerd"
	Aug 16 17:03:56 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:56.201908035Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=519
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.218394236Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233514110Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233584256Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233654513Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233690141Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233768300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233806284Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233955562Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233996222Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.234026670Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.234054929Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.234137090Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.234382642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.235934793Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.235985918Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.236117022Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.236159609Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.236256962Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.236327154Z" level=info msg="metadata content store policy set" policy=shared
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238422470Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238509436Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238555940Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238594343Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238633954Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238734892Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238944917Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239083528Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239123172Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239153894Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239184820Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239214617Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239248728Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239285992Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239333696Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239368624Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239411950Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239451554Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239490333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239523121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239554681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239589296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239621390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239653930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239683266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239712579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239742056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239772828Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239801901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239830636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239859570Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239893189Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239929555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239960582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239991787Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240076099Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240121809Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240153088Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240182438Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240210918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240240067Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240268586Z" level=info msg="NRI interface is disabled by configuration."
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240520719Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240609661Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240710485Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240795617Z" level=info msg="containerd successfully booted in 0.023088s"
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.221812848Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.228854621Z" level=info msg="Loading containers: start."
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.312023793Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.390459780Z" level=info msg="Loading containers: done."
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.404746652Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.404864935Z" level=info msg="Daemon has completed initialization"
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.430646618Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.430810646Z" level=info msg="API listen on [::]:2376"
	Aug 16 17:03:57 ha-286000-m03 systemd[1]: Started Docker Application Container Engine.
	Aug 16 17:03:58 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:58.488220464Z" level=info msg="Processing signal 'terminated'"
	Aug 16 17:03:58 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:58.489144852Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 16 17:03:58 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:58.489473398Z" level=info msg="Daemon shutdown complete"
	Aug 16 17:03:58 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:58.489515121Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 16 17:03:58 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:58.489527809Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 16 17:03:58 ha-286000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Aug 16 17:03:59 ha-286000-m03 systemd[1]: docker.service: Deactivated successfully.
	Aug 16 17:03:59 ha-286000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Aug 16 17:03:59 ha-286000-m03 systemd[1]: Starting Docker Application Container Engine...
	Aug 16 17:03:59 ha-286000-m03 dockerd[913]: time="2024-08-16T17:03:59.520198872Z" level=info msg="Starting up"
	Aug 16 17:04:59 ha-286000-m03 dockerd[913]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 16 17:04:59 ha-286000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 16 17:04:59 ha-286000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 16 17:04:59 ha-286000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0816 10:04:59.474111    3758 out.go:270] * 
	W0816 10:04:59.474938    3758 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:04:59.558192    3758 out.go:201] 
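	What the journal above shows: the first dockerd (pid 513) starts its own managed containerd, comes up cleanly, and is then deliberately restarted as part of reconfiguration; the second dockerd (pid 913) instead tries to dial the system socket at /run/containerd/containerd.sock, which the run had stopped moments earlier (sudo systemctl stop -f containerd above), and gives up at the dial deadline, so docker.service exits 1 and minikube aborts with RUNTIME_ENABLE. A minimal triage sequence on the guest, using the commands the error text itself names plus two assumed extra checks:
	
		# Named in the error output above:
		systemctl status docker.service
		journalctl -xeu docker.service
		# Assumed extra checks: does the socket dockerd tried to dial exist,
		# and is the system containerd actually running?
		ls -l /run/containerd/containerd.sock
		systemctl is-active containerd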
	
	
	==> Docker <==
	Aug 16 17:05:04 ha-286000 cri-dockerd[1134]: time="2024-08-16T17:05:04Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Aug 16 17:05:04 ha-286000 dockerd[1241]: time="2024-08-16T17:05:04.522783748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:05:04 ha-286000 dockerd[1241]: time="2024-08-16T17:05:04.522878436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:05:04 ha-286000 dockerd[1241]: time="2024-08-16T17:05:04.522904596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:05:04 ha-286000 dockerd[1241]: time="2024-08-16T17:05:04.523003751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:18:06 ha-286000 dockerd[1235]: time="2024-08-16T17:18:06.658515858Z" level=info msg="ignoring event" container=cafa34c562392ad0f4839d505d8a5b0e77e1dad3770e1f2c6e5f587dacbaa856 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 16 17:18:06 ha-286000 dockerd[1241]: time="2024-08-16T17:18:06.659400251Z" level=info msg="shim disconnected" id=cafa34c562392ad0f4839d505d8a5b0e77e1dad3770e1f2c6e5f587dacbaa856 namespace=moby
	Aug 16 17:18:06 ha-286000 dockerd[1241]: time="2024-08-16T17:18:06.659461431Z" level=warning msg="cleaning up after shim disconnected" id=cafa34c562392ad0f4839d505d8a5b0e77e1dad3770e1f2c6e5f587dacbaa856 namespace=moby
	Aug 16 17:18:06 ha-286000 dockerd[1241]: time="2024-08-16T17:18:06.659470418Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 16 17:18:06 ha-286000 dockerd[1241]: time="2024-08-16T17:18:06.825334448Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:18:06 ha-286000 dockerd[1241]: time="2024-08-16T17:18:06.825403496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:18:06 ha-286000 dockerd[1241]: time="2024-08-16T17:18:06.825417265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:18:06 ha-286000 dockerd[1241]: time="2024-08-16T17:18:06.825488106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:18:07 ha-286000 dockerd[1235]: time="2024-08-16T17:18:07.380914614Z" level=info msg="ignoring event" container=f55b59f53c6eb976c8fd19fc0412bef109f9fc2505622d0a8ec85ff7a5968741 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 16 17:18:07 ha-286000 dockerd[1241]: time="2024-08-16T17:18:07.381086772Z" level=info msg="shim disconnected" id=f55b59f53c6eb976c8fd19fc0412bef109f9fc2505622d0a8ec85ff7a5968741 namespace=moby
	Aug 16 17:18:07 ha-286000 dockerd[1241]: time="2024-08-16T17:18:07.381165916Z" level=warning msg="cleaning up after shim disconnected" id=f55b59f53c6eb976c8fd19fc0412bef109f9fc2505622d0a8ec85ff7a5968741 namespace=moby
	Aug 16 17:18:07 ha-286000 dockerd[1241]: time="2024-08-16T17:18:07.381174899Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 16 17:18:07 ha-286000 dockerd[1241]: time="2024-08-16T17:18:07.835148340Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:18:07 ha-286000 dockerd[1241]: time="2024-08-16T17:18:07.835268034Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:18:07 ha-286000 dockerd[1241]: time="2024-08-16T17:18:07.835304228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:18:07 ha-286000 dockerd[1241]: time="2024-08-16T17:18:07.835440471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:18:38 ha-286000 dockerd[1235]: time="2024-08-16T17:18:38.182671263Z" level=info msg="ignoring event" container=078fa65ce0cbbaee7bfe7a37ccce2f4babca06b52980d2e1de0c1e09137a15d3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 16 17:18:38 ha-286000 dockerd[1241]: time="2024-08-16T17:18:38.183075145Z" level=info msg="shim disconnected" id=078fa65ce0cbbaee7bfe7a37ccce2f4babca06b52980d2e1de0c1e09137a15d3 namespace=moby
	Aug 16 17:18:38 ha-286000 dockerd[1241]: time="2024-08-16T17:18:38.183173282Z" level=warning msg="cleaning up after shim disconnected" id=078fa65ce0cbbaee7bfe7a37ccce2f4babca06b52980d2e1de0c1e09137a15d3 namespace=moby
	Aug 16 17:18:38 ha-286000 dockerd[1241]: time="2024-08-16T17:18:38.183184275Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	078fa65ce0cbb       6e38f40d628db                                                                                         32 seconds ago      Exited              storage-provisioner       1                   482990a4b00e6       storage-provisioner
	a5da1871a366d       38af8ddebf499                                                                                         33 seconds ago      Running             kube-vip                  1                   69fba128b04a6       kube-vip-ha-286000
	5bbe7a8cc492e       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   13 minutes ago      Running             busybox                   0                   1873ade92edb9       busybox-7dff88458-dvmvk
	bcd7170b050a5       cbb01a7bd410d                                                                                         15 minutes ago      Running             coredns                   0                   452942e267927       coredns-6f6b679f8f-rfbz7
	60d3d03e297ce       cbb01a7bd410d                                                                                         15 minutes ago      Running             coredns                   0                   fbd84fb813c90       coredns-6f6b679f8f-2kqjf
	2677394dcd66f       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              16 minutes ago      Running             kindnet-cni               0                   afaf657e85c32       kindnet-whqxb
	81f6c96d46494       ad83b2ca7b09e                                                                                         16 minutes ago      Running             kube-proxy                0                   dfb822fc27b5e       kube-proxy-w4nt2
	cafa34c562392       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     16 minutes ago      Exited              kube-vip                  0                   69fba128b04a6       kube-vip-ha-286000
	8f5867ee99d9b       045733566833c                                                                                         16 minutes ago      Running             kube-controller-manager   0                   e87fd3e77384f       kube-controller-manager-ha-286000
	f7b2e9efdd94f       1766f54c897f0                                                                                         16 minutes ago      Running             kube-scheduler            0                   8e2b186980aff       kube-scheduler-ha-286000
	d8dadff6cec78       2e96e5913fc06                                                                                         16 minutes ago      Running             etcd                      0                   cdb14ff7d8896       etcd-ha-286000
	5f3adc202a7a2       604f5db92eaa8                                                                                         16 minutes ago      Running             kube-apiserver            0                   818ee6dafe6c9       kube-apiserver-ha-286000
	
	
	==> coredns [60d3d03e297c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:37081 - 62351 "HINFO IN 7437422972060865489.3041931607585121070. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009759141s
	[INFO] 10.244.0.4:34542 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.00094119s
	[INFO] 10.244.0.4:38912 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.000921826s
	[INFO] 10.244.1.2:39585 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000070042s
	[INFO] 10.244.0.4:39673 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000101817s
	[INFO] 10.244.0.4:55820 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000225412s
	[INFO] 10.244.1.2:48427 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135793s
	[INFO] 10.244.1.2:33204 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000791531s
	[INFO] 10.244.1.2:51238 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081236s
	[INFO] 10.244.1.2:42705 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000135743s
	[INFO] 10.244.1.2:33254 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106295s
	[INFO] 10.244.0.4:53900 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067988s
	[INFO] 10.244.1.2:48994 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00006771s
	[INFO] 10.244.1.2:56734 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000138883s
	[INFO] 10.244.0.4:53039 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000056881s
	[INFO] 10.244.0.4:47474 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000052458s
	[INFO] 10.244.1.2:35027 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113257s
	[INFO] 10.244.1.2:60680 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000087182s
	[INFO] 10.244.1.2:36287 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000080566s
	
	
	==> coredns [bcd7170b050a] <==
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:47813 - 35687 "HINFO IN 8431179596625010309.1478595720938230641. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009077429s
	[INFO] 10.244.0.4:47786 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132847s
	[INFO] 10.244.0.4:40096 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.001354873s
	[INFO] 10.244.1.2:40884 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096214s
	[INFO] 10.244.1.2:55655 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000050276s
	[INFO] 10.244.1.2:47690 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000614458s
	[INFO] 10.244.0.4:45344 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000076791s
	[INFO] 10.244.0.4:53101 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.001042655s
	[INFO] 10.244.0.4:43889 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000082483s
	[INFO] 10.244.0.4:59210 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.00074308s
	[INFO] 10.244.0.4:38429 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00010233s
	[INFO] 10.244.0.4:48679 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00007214s
	[INFO] 10.244.1.2:43879 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000062278s
	[INFO] 10.244.1.2:45902 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000100209s
	[INFO] 10.244.1.2:39740 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071593s
	[INFO] 10.244.0.4:50878 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000077864s
	[INFO] 10.244.0.4:51260 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078903s
	[INFO] 10.244.0.4:38206 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000049117s
	[INFO] 10.244.1.2:54952 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000071797s
	[INFO] 10.244.1.2:36478 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063757s
	[INFO] 10.244.0.4:43240 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000066616s
	[INFO] 10.244.0.4:60894 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000052873s
	[INFO] 10.244.1.2:43932 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00008816s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server: etcdserver: request timed out
	
	
	==> dmesg <==
	[  +2.760553] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +1.351722] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000003] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug16 17:02] systemd-fstab-generator[519]: Ignoring "noauto" option for root device
	[  +0.114039] systemd-fstab-generator[531]: Ignoring "noauto" option for root device
	[  +1.207693] kauditd_printk_skb: 42 callbacks suppressed
	[  +0.611653] systemd-fstab-generator[786]: Ignoring "noauto" option for root device
	[  +0.265175] systemd-fstab-generator[846]: Ignoring "noauto" option for root device
	[  +0.101006] systemd-fstab-generator[858]: Ignoring "noauto" option for root device
	[  +0.123308] systemd-fstab-generator[872]: Ignoring "noauto" option for root device
	[  +2.497195] systemd-fstab-generator[1086]: Ignoring "noauto" option for root device
	[  +0.110299] systemd-fstab-generator[1098]: Ignoring "noauto" option for root device
	[  +0.095670] systemd-fstab-generator[1110]: Ignoring "noauto" option for root device
	[  +0.122299] systemd-fstab-generator[1126]: Ignoring "noauto" option for root device
	[  +3.636333] systemd-fstab-generator[1227]: Ignoring "noauto" option for root device
	[  +0.056190] kauditd_printk_skb: 233 callbacks suppressed
	[  +2.505796] systemd-fstab-generator[1479]: Ignoring "noauto" option for root device
	[  +3.634449] systemd-fstab-generator[1611]: Ignoring "noauto" option for root device
	[  +0.055756] kauditd_printk_skb: 70 callbacks suppressed
	[  +6.910973] systemd-fstab-generator[2107]: Ignoring "noauto" option for root device
	[  +0.079351] kauditd_printk_skb: 72 callbacks suppressed
	[  +9.264773] kauditd_printk_skb: 51 callbacks suppressed
	[Aug16 17:03] kauditd_printk_skb: 26 callbacks suppressed
	[Aug16 17:18] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [d8dadff6cec7] <==
	{"level":"warn","ts":"2024-08-16T17:18:52.653886Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T17:18:39.654876Z","time spent":"12.998976096s","remote":"127.0.0.1:39116","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":0,"response size":0,"request content":"key:\"/registry/statefulsets/\" range_end:\"/registry/statefulsets0\" limit:10000 "}
	{"level":"warn","ts":"2024-08-16T17:18:52.653995Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T17:18:39.654835Z","time spent":"12.999153392s","remote":"127.0.0.1:38932","response type":"/etcdserverpb.KV/Range","request count":0,"request size":81,"response count":0,"response size":0,"request content":"key:\"/registry/certificatesigningrequests/\" range_end:\"/registry/certificatesigningrequests0\" limit:10000 "}
	{"level":"warn","ts":"2024-08-16T17:18:52.654054Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T17:18:39.654795Z","time spent":"12.999250954s","remote":"127.0.0.1:38842","response type":"/etcdserverpb.KV/Range","request count":0,"request size":65,"response count":0,"response size":0,"request content":"key:\"/registry/services/endpoints/\" range_end:\"/registry/services/endpoints0\" limit:10000 "}
	{"level":"warn","ts":"2024-08-16T17:18:52.654131Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T17:18:39.654781Z","time spent":"12.999343584s","remote":"127.0.0.1:39058","response type":"/etcdserverpb.KV/Range","request count":0,"request size":45,"response count":0,"response size":0,"request content":"key:\"/registry/csinodes/\" range_end:\"/registry/csinodes0\" limit:10000 "}
	{"level":"warn","ts":"2024-08-16T17:18:52.654241Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T17:18:39.653660Z","time spent":"13.000574376s","remote":"127.0.0.1:38908","response type":"/etcdserverpb.KV/Range","request count":0,"request size":77,"response count":0,"response size":0,"request content":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" limit:10000 "}
	{"level":"warn","ts":"2024-08-16T17:18:52.654432Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T17:18:39.653628Z","time spent":"13.000796969s","remote":"127.0.0.1:38970","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":0,"response size":0,"request content":"key:\"/registry/ingress/\" range_end:\"/registry/ingress0\" limit:10000 "}
	{"level":"warn","ts":"2024-08-16T17:18:52.654442Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T17:18:39.653638Z","time spent":"13.000802062s","remote":"127.0.0.1:38862","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":0,"response size":0,"request content":"key:\"/registry/pods/\" range_end:\"/registry/pods0\" limit:10000 "}
	{"level":"warn","ts":"2024-08-16T17:18:52.654451Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T17:18:39.653617Z","time spent":"13.000832203s","remote":"127.0.0.1:38754","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":0,"request content":"key:\"/registry/resourcequotas/\" range_end:\"/registry/resourcequotas0\" limit:10000 "}
	{"level":"warn","ts":"2024-08-16T17:18:52.654579Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T17:18:39.653605Z","time spent":"13.000966399s","remote":"127.0.0.1:39158","response type":"/etcdserverpb.KV/Range","request count":0,"request size":87,"response count":0,"response size":0,"request content":"key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" limit:10000 "}
	{"level":"warn","ts":"2024-08-16T17:18:52.654593Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T17:18:39.652230Z","time spent":"13.002358445s","remote":"127.0.0.1:38668","response type":"/etcdserverpb.KV/Range","request count":0,"request size":121,"response count":0,"response size":0,"request content":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" limit:10000 "}
	{"level":"warn","ts":"2024-08-16T17:18:52.654700Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T17:18:39.652190Z","time spent":"13.002503619s","remote":"127.0.0.1:39030","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":0,"response size":0,"request content":"key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" limit:10000 "}
	{"level":"warn","ts":"2024-08-16T17:18:52.654787Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T17:18:39.650594Z","time spent":"13.004186902s","remote":"127.0.0.1:38956","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":0,"response size":0,"request content":"key:\"/registry/networkpolicies/\" range_end:\"/registry/networkpolicies0\" limit:10000 "}
	{"level":"warn","ts":"2024-08-16T17:18:52.654887Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T17:18:39.650578Z","time spent":"13.004302844s","remote":"127.0.0.1:38786","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":0,"response size":0,"request content":"key:\"/registry/namespaces/\" range_end:\"/registry/namespaces0\" limit:10000 "}
	{"level":"warn","ts":"2024-08-16T17:18:52.655044Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T17:18:39.650559Z","time spent":"13.004450099s","remote":"127.0.0.1:39086","response type":"/etcdserverpb.KV/Range","request count":0,"request size":69,"response count":0,"response size":0,"request content":"key:\"/registry/csistoragecapacities/\" range_end:\"/registry/csistoragecapacities0\" limit:10000 "}
	{"level":"warn","ts":"2024-08-16T17:18:52.655161Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T17:18:39.647317Z","time spent":"13.00783749s","remote":"127.0.0.1:39118","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":0,"response size":0,"request content":"key:\"/registry/daemonsets/\" range_end:\"/registry/daemonsets0\" limit:10000 "}
	{"level":"warn","ts":"2024-08-16T17:18:52.655236Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T17:18:39.647276Z","time spent":"13.007953297s","remote":"127.0.0.1:38852","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":0,"response size":0,"request content":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" limit:10000 "}
	{"level":"warn","ts":"2024-08-16T17:18:52.655263Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T17:18:39.647265Z","time spent":"13.007994367s","remote":"127.0.0.1:38996","response type":"/etcdserverpb.KV/Range","request count":0,"request size":39,"response count":0,"response size":0,"request content":"key:\"/registry/roles/\" range_end:\"/registry/roles0\" limit:10000 "}
	{"level":"warn","ts":"2024-08-16T17:18:52.655377Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T17:18:39.647253Z","time spent":"13.0081103s","remote":"127.0.0.1:39148","response type":"/etcdserverpb.KV/Range","request count":0,"request size":91,"response count":0,"response size":0,"request content":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" limit:10000 "}
	{"level":"warn","ts":"2024-08-16T17:18:52.655481Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T17:18:39.647235Z","time spent":"13.008238483s","remote":"127.0.0.1:38770","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":0,"response size":0,"request content":"key:\"/registry/secrets/\" range_end:\"/registry/secrets0\" limit:10000 "}
	{"level":"warn","ts":"2024-08-16T17:18:52.655549Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T17:18:39.645805Z","time spent":"13.009737266s","remote":"127.0.0.1:39092","response type":"/etcdserverpb.KV/Range","request count":0,"request size":83,"response count":0,"response size":0,"request content":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" limit:10000 "}
	{"level":"warn","ts":"2024-08-16T17:18:52.655643Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T17:18:39.645759Z","time spent":"13.009877264s","remote":"127.0.0.1:38938","response type":"/etcdserverpb.KV/Range","request count":0,"request size":41,"response count":0,"response size":0,"request content":"key:\"/registry/leases/\" range_end:\"/registry/leases0\" limit:10000 "}
	{"level":"warn","ts":"2024-08-16T17:18:52.655746Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T17:18:38.785642Z","time spent":"13.870098003s","remote":"127.0.0.1:38852","response type":"/etcdserverpb.KV/Range","request count":0,"request size":42,"response count":0,"response size":0,"request content":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" count_only:true "}
	{"level":"warn","ts":"2024-08-16T17:18:52.655837Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T17:18:38.657637Z","time spent":"13.998194926s","remote":"127.0.0.1:38660","response type":"/etcdserverpb.KV/Range","request count":0,"request size":36,"response count":0,"response size":0,"request content":"key:\"/registry/masterleases/192.169.0.5\" "}
	{"level":"error","ts":"2024-08-16T17:18:52.656000Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[-]linearizable_read failed: etcdserver: request timed out\n[+]data_corruption ok\n[+]serializable_read ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"warn","ts":"2024-08-16T17:18:52.656028Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T17:18:39.657825Z","time spent":"12.998195071s","remote":"127.0.0.1:39190","response type":"/etcdserverpb.KV/Range","request count":0,"request size":97,"response count":0,"response size":0,"request content":"key:\"/registry/apiregistration.k8s.io/apiservices/\" range_end:\"/registry/apiregistration.k8s.io/apiservices0\" limit:10000 "}
	
	
	==> kernel <==
	 17:18:52 up 17 min,  0 users,  load average: 1.88, 0.77, 0.35
	Linux ha-286000 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2677394dcd66] <==
	I0816 17:18:05.224985       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	I0816 17:18:15.231618       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:18:15.231696       1 main.go:299] handling current node
	I0816 17:18:15.231774       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:18:15.231864       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:18:15.232011       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 17:18:15.232390       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	I0816 17:18:25.231687       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 17:18:25.231799       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	I0816 17:18:25.231955       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:18:25.232043       1 main.go:299] handling current node
	I0816 17:18:25.232094       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:18:25.232177       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:18:35.223255       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:18:35.223294       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:18:35.223466       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 17:18:35.223536       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	I0816 17:18:35.223727       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:18:35.223790       1 main.go:299] handling current node
	I0816 17:18:45.223573       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:18:45.223687       1 main.go:299] handling current node
	I0816 17:18:45.223745       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:18:45.224065       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:18:45.224477       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 17:18:45.224606       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [5f3adc202a7a] <==
	E0816 17:18:52.663731       1 cacher.go:478] cacher (validatingwebhookconfigurations.admissionregistration.k8s.io): unexpected ListAndWatch error: failed to list *admissionregistration.ValidatingWebhookConfiguration: etcdserver: request timed out; reinitializing...
	W0816 17:18:52.663817       1 reflector.go:561] storage/cacher.go:/roles: failed to list *rbac.Role: etcdserver: request timed out
	E0816 17:18:52.663920       1 cacher.go:478] cacher (roles.rbac.authorization.k8s.io): unexpected ListAndWatch error: failed to list *rbac.Role: etcdserver: request timed out; reinitializing...
	W0816 17:18:52.664048       1 reflector.go:561] storage/cacher.go:/minions: failed to list *core.Node: etcdserver: request timed out
	E0816 17:18:52.664155       1 cacher.go:478] cacher (nodes): unexpected ListAndWatch error: failed to list *core.Node: etcdserver: request timed out; reinitializing...
	W0816 17:18:52.664315       1 reflector.go:561] storage/cacher.go:/daemonsets: failed to list *apps.DaemonSet: etcdserver: request timed out
	E0816 17:18:52.664438       1 cacher.go:478] cacher (daemonsets.apps): unexpected ListAndWatch error: failed to list *apps.DaemonSet: etcdserver: request timed out; reinitializing...
	W0816 17:18:52.664502       1 reflector.go:561] storage/cacher.go:/csinodes: failed to list *storage.CSINode: etcdserver: request timed out
	E0816 17:18:52.664605       1 cacher.go:478] cacher (csinodes.storage.k8s.io): unexpected ListAndWatch error: failed to list *storage.CSINode: etcdserver: request timed out; reinitializing...
	W0816 17:18:52.664641       1 reflector.go:561] storage/cacher.go:/services/endpoints: failed to list *core.Endpoints: etcdserver: request timed out
	E0816 17:18:52.664717       1 cacher.go:478] cacher (endpoints): unexpected ListAndWatch error: failed to list *core.Endpoints: etcdserver: request timed out; reinitializing...
	W0816 17:18:52.664839       1 reflector.go:561] storage/cacher.go:/csistoragecapacities: failed to list *storage.CSIStorageCapacity: etcdserver: request timed out
	E0816 17:18:52.664889       1 cacher.go:478] cacher (csistoragecapacities.storage.k8s.io): unexpected ListAndWatch error: failed to list *storage.CSIStorageCapacity: etcdserver: request timed out; reinitializing...
	W0816 17:18:52.664982       1 reflector.go:561] storage/cacher.go:/certificatesigningrequests: failed to list *certificates.CertificateSigningRequest: etcdserver: request timed out
	E0816 17:18:52.665053       1 cacher.go:478] cacher (certificatesigningrequests.certificates.k8s.io): unexpected ListAndWatch error: failed to list *certificates.CertificateSigningRequest: etcdserver: request timed out; reinitializing...
	W0816 17:18:52.665191       1 reflector.go:561] storage/cacher.go:/statefulsets: failed to list *apps.StatefulSet: etcdserver: request timed out
	E0816 17:18:52.665337       1 cacher.go:478] cacher (statefulsets.apps): unexpected ListAndWatch error: failed to list *apps.StatefulSet: etcdserver: request timed out; reinitializing...
	W0816 17:18:52.665459       1 reflector.go:561] storage/cacher.go:/namespaces: failed to list *core.Namespace: etcdserver: request timed out
	E0816 17:18:52.665592       1 cacher.go:478] cacher (namespaces): unexpected ListAndWatch error: failed to list *core.Namespace: etcdserver: request timed out; reinitializing...
	W0816 17:18:52.666266       1 reflector.go:561] storage/cacher.go:/validatingadmissionpolicies: failed to list *admissionregistration.ValidatingAdmissionPolicy: etcdserver: request timed out
	E0816 17:18:52.666523       1 cacher.go:478] cacher (validatingadmissionpolicies.admissionregistration.k8s.io): unexpected ListAndWatch error: failed to list *admissionregistration.ValidatingAdmissionPolicy: etcdserver: request timed out; reinitializing...
	W0816 17:18:52.666694       1 reflector.go:561] storage/cacher.go:/endpointslices: failed to list *discovery.EndpointSlice: etcdserver: request timed out
	E0816 17:18:52.666747       1 cacher.go:478] cacher (endpointslices.discovery.k8s.io): unexpected ListAndWatch error: failed to list *discovery.EndpointSlice: etcdserver: request timed out; reinitializing...
	W0816 17:18:52.666869       1 reflector.go:561] storage/cacher.go:/ingressclasses: failed to list *networking.IngressClass: etcdserver: request timed out
	E0816 17:18:52.667014       1 cacher.go:478] cacher (ingressclasses.networking.k8s.io): unexpected ListAndWatch error: failed to list *networking.IngressClass: etcdserver: request timed out; reinitializing...
	
	
	==> kube-controller-manager [8f5867ee99d9] <==
	I0816 17:17:21.970272       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:17:21.970425       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:17:22.057762       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="38.461µs"
	I0816 17:17:22.062866       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:17:22.339566       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:17:23.372821       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:17:24.020984       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:17:24.021820       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-286000-m04"
	I0816 17:17:24.091142       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:17:32.146156       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:17:44.596616       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:17:44.599585       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-286000-m04"
	I0816 17:17:44.604245       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:17:44.612723       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="57.66µs"
	I0816 17:17:44.622735       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="33.854µs"
	I0816 17:17:44.628514       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="41.171µs"
	I0816 17:17:46.710170       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="4.809826ms"
	I0816 17:17:46.710765       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="32.942µs"
	I0816 17:17:48.294811       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:17:52.608525       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:18:34.248590       1 request.go:700] Waited for 1.03097685s, retries: 2, retry-after: 1s - retry-reason: due to server-side throttling, FlowSchema UID: "5b4db95b-b438-49a8-bc09-32b1bd5dffd2" - request: GET:https://192.169.0.5:8443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=2678&timeout=9m30s&timeoutSeconds=570&watch=true
	E0816 17:18:36.043785       1 node_lifecycle_controller.go:978] "Error updating node" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" logger="node-lifecycle-controller" node="ha-286000-m02"
	I0816 17:18:44.293544       1 request.go:700] Waited for 1.298741375s, retries: 6, retry-after: 1s - retry-reason: due to server-side throttling, FlowSchema UID: "5b4db95b-b438-49a8-bc09-32b1bd5dffd2" - request: GET:https://192.169.0.5:8443/apis/networking.k8s.io/v1/ingressclasses?allowWatchBookmarks=true&resourceVersion=2622&timeout=5m46s&timeoutSeconds=346&watch=true
	E0816 17:18:45.639843       1 node_lifecycle_controller.go:720] "Failed while getting a Node to retry updating node health. Probably Node was deleted" logger="node-lifecycle-controller" node="ha-286000-m02"
	E0816 17:18:45.641333       1 node_lifecycle_controller.go:725] "Update health of Node from Controller error, Skipping - no pods will be evicted" err="etcdserver: request timed out" logger="node-lifecycle-controller" node=""
	
	
	==> kube-proxy [81f6c96d4649] <==
	E0816 17:02:30.223011       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 17:02:30.267797       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 17:02:30.267825       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 17:02:30.267843       1 server_linux.go:169] "Using iptables Proxier"
	I0816 17:02:30.270154       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 17:02:30.270311       1 server.go:483] "Version info" version="v1.31.0"
	I0816 17:02:30.270319       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 17:02:30.271352       1 config.go:197] "Starting service config controller"
	I0816 17:02:30.271358       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 17:02:30.271372       1 config.go:104] "Starting endpoint slice config controller"
	I0816 17:02:30.271375       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 17:02:30.271598       1 config.go:326] "Starting node config controller"
	I0816 17:02:30.271603       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 17:02:30.371824       1 shared_informer.go:320] Caches are synced for node config
	I0816 17:02:30.371859       1 shared_informer.go:320] Caches are synced for service config
	I0816 17:02:30.371867       1 shared_informer.go:320] Caches are synced for endpoint slice config
	W0816 17:18:49.931699       1 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 17:18:49.931724       1 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 17:18:49.931757       1 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 17:18:51.554889       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2600": dial tcp 192.169.0.254:8443: connect: no route to host
	W0816 17:18:51.555065       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2592": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:18:51.559015       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2592\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	E0816 17:18:51.559027       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2600\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:18:51.558115       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2678": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:18:51.559078       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2678\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [f7b2e9efdd94] <==
	W0816 17:02:22.176847       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 17:02:22.176852       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:22.176967       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 17:02:22.176999       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:22.177013       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 17:02:22.179079       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:22.179253       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 17:02:22.179322       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0816 17:02:23.072963       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 17:02:23.073256       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:23.081000       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 17:02:23.081176       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:23.142220       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 17:02:23.142263       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:23.166962       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 17:02:23.167003       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:23.255186       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 17:02:23.255230       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:23.456273       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 17:02:23.456447       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0816 17:02:26.460413       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0816 17:05:02.827326       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-k9m92\": pod busybox-7dff88458-k9m92 is already assigned to node \"ha-286000-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-k9m92" node="ha-286000-m02"
	E0816 17:05:02.827387       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a516859c-0da8-4ba9-a896-ac720495818f(default/busybox-7dff88458-k9m92) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-k9m92"
	E0816 17:05:02.827403       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-k9m92\": pod busybox-7dff88458-k9m92 is already assigned to node \"ha-286000-m02\"" pod="default/busybox-7dff88458-k9m92"
	I0816 17:05:02.827520       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-k9m92" node="ha-286000-m02"
	
	
	==> kubelet <==
	Aug 16 17:18:48 ha-286000 kubelet[2114]: W0816 17:18:48.871710    2114 reflector.go:484] object-"kube-system"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 16 17:18:48 ha-286000 kubelet[2114]: W0816 17:18:48.871813    2114 reflector.go:484] object-"default"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 16 17:18:48 ha-286000 kubelet[2114]: W0816 17:18:48.871830    2114 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 16 17:18:51 ha-286000 kubelet[2114]: W0816 17:18:51.550298    2114 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=2592": dial tcp 192.169.0.254:8443: connect: no route to host
	Aug 16 17:18:51 ha-286000 kubelet[2114]: W0816 17:18:51.550390    2114 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=2678": dial tcp 192.169.0.254:8443: connect: no route to host
	Aug 16 17:18:51 ha-286000 kubelet[2114]: E0816 17:18:51.552267    2114 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=2592\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Aug 16 17:18:51 ha-286000 kubelet[2114]: E0816 17:18:51.552281    2114 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=2678\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Aug 16 17:18:51 ha-286000 kubelet[2114]: W0816 17:18:51.552361    2114 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=2592": dial tcp 192.169.0.254:8443: connect: no route to host
	Aug 16 17:18:51 ha-286000 kubelet[2114]: E0816 17:18:51.552386    2114 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=2592\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Aug 16 17:18:51 ha-286000 kubelet[2114]: W0816 17:18:51.552427    2114 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2477": dial tcp 192.169.0.254:8443: connect: no route to host
	Aug 16 17:18:51 ha-286000 kubelet[2114]: W0816 17:18:51.552446    2114 reflector.go:561] pkg/kubelet/config/apiserver.go:66: failed to list *v1.Pod: Get "https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dha-286000&resourceVersion=2591": dial tcp 192.169.0.254:8443: connect: no route to host
	Aug 16 17:18:51 ha-286000 kubelet[2114]: E0816 17:18:51.552451    2114 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2477\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Aug 16 17:18:51 ha-286000 kubelet[2114]: E0816 17:18:51.552519    2114 reflector.go:158] "Unhandled Error" err="pkg/kubelet/config/apiserver.go:66: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dha-286000&resourceVersion=2591\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Aug 16 17:18:51 ha-286000 kubelet[2114]: W0816 17:18:51.552646    2114 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=2576": dial tcp 192.169.0.254:8443: connect: no route to host
	Aug 16 17:18:51 ha-286000 kubelet[2114]: E0816 17:18:51.552668    2114 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=2576\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Aug 16 17:18:51 ha-286000 kubelet[2114]: E0816 17:18:51.552627    2114 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events\": dial tcp 192.169.0.254:8443: connect: no route to host" event="&Event{ObjectMeta:{kube-apiserver-ha-286000.17ec450d53f6e64c  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-286000,UID:54fd9c91db8add4ea97d383d73f94dbe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ha-286000,},FirstTimestamp:2024-08-16 17:18:00.921638476 +0000 UTC m=+935.425081470,LastTimestamp:2024-08-16 17:18:00.921638476 +0000 UTC m=+935.425081470,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related
:nil,ReportingController:kubelet,ReportingInstance:ha-286000,}"
	Aug 16 17:18:51 ha-286000 kubelet[2114]: I0816 17:18:51.552748    2114 status_manager.go:851] "Failed to get status for pod" podUID="4805d53b-2db3-4092-a3f2-d4a854e93adc" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.169.0.254:8443: connect: no route to host"
	Aug 16 17:18:51 ha-286000 kubelet[2114]: W0816 17:18:51.552770    2114 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=2662": dial tcp 192.169.0.254:8443: connect: no route to host
	Aug 16 17:18:51 ha-286000 kubelet[2114]: E0816 17:18:51.552792    2114 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=2662\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Aug 16 17:18:51 ha-286000 kubelet[2114]: W0816 17:18:51.552849    2114 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2572": dial tcp 192.169.0.254:8443: connect: no route to host
	Aug 16 17:18:51 ha-286000 kubelet[2114]: E0816 17:18:51.552915    2114 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2572\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Aug 16 17:18:51 ha-286000 kubelet[2114]: W0816 17:18:51.553313    2114 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=2488": dial tcp 192.169.0.254:8443: connect: no route to host
	Aug 16 17:18:51 ha-286000 kubelet[2114]: E0816 17:18:51.553367    2114 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=2488\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Aug 16 17:18:51 ha-286000 kubelet[2114]: E0816 17:18:51.553410    2114 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-286000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="200ms"
	Aug 16 17:18:51 ha-286000 kubelet[2114]: I0816 17:18:51.631685    2114 scope.go:117] "RemoveContainer" containerID="078fa65ce0cbbaee7bfe7a37ccce2f4babca06b52980d2e1de0c1e09137a15d3"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-286000 -n ha-286000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-286000 -n ha-286000: exit status 2 (16.220891147s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-286000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (74.10s)
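The apiserver's "Stopped" status above follows from the etcd errors earlier in this post-mortem: etcd's /readyz handler answered 503 with "[-]linearizable_read failed: etcdserver: request timed out", and every apiserver cacher list then failed with the same timeout. A minimal Go sketch of probing that endpoint by hand; the 127.0.0.1:2381 address is an assumption based on the usual kubeadm etcd manifest (--listen-metrics-urls), not something this report confirms:

	// readyz_probe.go - hypothetical helper, not part of this test suite.
	// Queries etcd's /readyz endpoint, the same handler that logged the
	// 503 in the etcd output above.
	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 5 * time.Second}
		// Assumed kubeadm metrics listener; adjust if etcd is configured differently.
		resp, err := client.Get("http://127.0.0.1:2381/readyz")
		if err != nil {
			fmt.Println("probe failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("status=%d\n%s\n", resp.StatusCode, body) // 503 while linearizable_read fails
	}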

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (64.5s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (17.139014905s)
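The assertion below (ha_test.go:413) decodes this command's JSON output and checks the profile's "Status" field: with one control-plane node stopped it expects "Degraded", but the run reports "Stopped". A minimal sketch of the same check, not the test's own code, written against the {"invalid":[],"valid":[...]} shape visible in the failure message:

	// profile_status.go - hypothetical sketch mirroring the ha_test.go:413 check.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// profileList keeps only the fields the check needs; the full config
	// struct is far larger, as the raw JSON below shows.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-darwin-amd64", "profile", "list", "--output", "json").Output()
		if err != nil {
			fmt.Println("profile list failed:", err)
			return
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			fmt.Println("decode failed:", err)
			return
		}
		for _, p := range pl.Valid {
			fmt.Printf("%s: %s\n", p.Name, p.Status) // expected "Degraded"; this run printed "Stopped"
		}
	}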
ha_test.go:413: expected profile "ha-286000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-286000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-286000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACoun
t\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-286000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"Ku
bernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.169.0.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.169.0.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false
,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\
",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-286000 -n ha-286000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-286000 -n ha-286000: exit status 2 (16.648626158s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
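The --format value here is a Go text/template rendered against minikube's status struct, which is why {{.Host}} prints "Running" while the earlier {{.APIServer}} query printed "Stopped". A minimal sketch of the same mechanism; the Status struct below is hypothetical and mirrors only the two fields these checks reference:

	// status_template.go - hypothetical sketch of rendering --format-style templates.
	package main

	import (
		"os"
		"text/template"
	)

	// Status mirrors only the fields queried in this report.
	type Status struct {
		Host      string
		APIServer string
	}

	func main() {
		st := Status{Host: "Running", APIServer: "Stopped"}
		// {{.Host}} selects the Host field, exactly as --format={{.Host}} does above.
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		_ = tmpl.Execute(os.Stdout, st) // prints "Running"
	}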
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-286000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-286000 logs -n 25: (12.949673407s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:15 PDT | 16 Aug 24 10:15 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:15 PDT | 16 Aug 24 10:15 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:15 PDT | 16 Aug 24 10:15 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:15 PDT | 16 Aug 24 10:15 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:15 PDT | 16 Aug 24 10:15 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT |                     |
	|         | busybox-7dff88458-99xmp --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT |                     |
	|         | busybox-7dff88458-99xmp --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT |                     |
	|         | busybox-7dff88458-99xmp -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92 -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT |                     |
	|         | busybox-7dff88458-99xmp              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92 -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| node    | add -p ha-286000 -v=7                | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:17 PDT | 16 Aug 24 10:17 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-286000 node stop m02 -v=7         | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:17 PDT | 16 Aug 24 10:18 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 10:01:48
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 10:01:48.113296    3758 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:01:48.113462    3758 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:01:48.113467    3758 out.go:358] Setting ErrFile to fd 2...
	I0816 10:01:48.113470    3758 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:01:48.113648    3758 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1276/.minikube/bin
	I0816 10:01:48.115192    3758 out.go:352] Setting JSON to false
	I0816 10:01:48.140204    3758 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1878,"bootTime":1723825830,"procs":433,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0816 10:01:48.140316    3758 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:01:48.194498    3758 out.go:177] * [ha-286000] minikube v1.33.1 on Darwin 14.6.1
	I0816 10:01:48.236650    3758 notify.go:220] Checking for updates...
	I0816 10:01:48.262360    3758 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:01:48.320415    3758 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:01:48.381368    3758 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0816 10:01:48.452505    3758 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:01:48.474398    3758 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:01:48.495459    3758 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:01:48.520120    3758 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:01:48.550379    3758 out.go:177] * Using the hyperkit driver based on user configuration
	I0816 10:01:48.592331    3758 start.go:297] selected driver: hyperkit
	I0816 10:01:48.592358    3758 start.go:901] validating driver "hyperkit" against <nil>
	I0816 10:01:48.592382    3758 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:01:48.596757    3758 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:01:48.596907    3758 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19461-1276/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0816 10:01:48.605545    3758 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0816 10:01:48.609748    3758 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:01:48.609772    3758 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0816 10:01:48.609805    3758 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 10:01:48.610046    3758 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:01:48.610094    3758 cni.go:84] Creating CNI manager for ""
	I0816 10:01:48.610103    3758 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0816 10:01:48.610108    3758 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0816 10:01:48.610236    3758 start.go:340] cluster config:
	{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docke
r CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I0816 10:01:48.610323    3758 iso.go:125] acquiring lock: {Name:mkb430b43b5e7a8a683454482519837ed996c78d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:01:48.632334    3758 out.go:177] * Starting "ha-286000" primary control-plane node in "ha-286000" cluster
	I0816 10:01:48.653456    3758 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:01:48.653537    3758 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0816 10:01:48.653563    3758 cache.go:56] Caching tarball of preloaded images
	I0816 10:01:48.653777    3758 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 10:01:48.653813    3758 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:01:48.654332    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:01:48.654370    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json: {Name:mk79fdae2e45cdfc987caaf16c5f211150e90dc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:01:48.654995    3758 start.go:360] acquireMachinesLock for ha-286000: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:01:48.655106    3758 start.go:364] duration metric: took 87.458µs to acquireMachinesLock for "ha-286000"
	I0816 10:01:48.655154    3758 start.go:93] Provisioning new machine with config: &{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:01:48.655246    3758 start.go:125] createHost starting for "" (driver="hyperkit")
	I0816 10:01:48.677450    3758 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 10:01:48.677701    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:01:48.677786    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:01:48.687593    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51004
	I0816 10:01:48.687935    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:01:48.688347    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:01:48.688357    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:01:48.688562    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:01:48.688668    3758 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:01:48.688765    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:01:48.688895    3758 start.go:159] libmachine.API.Create for "ha-286000" (driver="hyperkit")
	I0816 10:01:48.688916    3758 client.go:168] LocalClient.Create starting
	I0816 10:01:48.688948    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem
	I0816 10:01:48.688999    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:01:48.689012    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:01:48.689063    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem
	I0816 10:01:48.689100    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:01:48.689111    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:01:48.689128    3758 main.go:141] libmachine: Running pre-create checks...
	I0816 10:01:48.689135    3758 main.go:141] libmachine: (ha-286000) Calling .PreCreateCheck
	I0816 10:01:48.689224    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:48.689427    3758 main.go:141] libmachine: (ha-286000) Calling .GetConfigRaw
	I0816 10:01:48.698771    3758 main.go:141] libmachine: Creating machine...
	I0816 10:01:48.698794    3758 main.go:141] libmachine: (ha-286000) Calling .Create
	I0816 10:01:48.699018    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:48.699296    3758 main.go:141] libmachine: (ha-286000) DBG | I0816 10:01:48.699015    3766 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:01:48.699450    3758 main.go:141] libmachine: (ha-286000) Downloading /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1276/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 10:01:48.884950    3758 main.go:141] libmachine: (ha-286000) DBG | I0816 10:01:48.884833    3766 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa...
	I0816 10:01:48.986348    3758 main.go:141] libmachine: (ha-286000) DBG | I0816 10:01:48.986275    3766 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/ha-286000.rawdisk...
	I0816 10:01:48.986388    3758 main.go:141] libmachine: (ha-286000) DBG | Writing magic tar header
	I0816 10:01:48.986396    3758 main.go:141] libmachine: (ha-286000) DBG | Writing SSH key tar header
	I0816 10:01:48.987038    3758 main.go:141] libmachine: (ha-286000) DBG | I0816 10:01:48.987004    3766 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000 ...
	I0816 10:01:49.360700    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:49.360726    3758 main.go:141] libmachine: (ha-286000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/hyperkit.pid
	I0816 10:01:49.360741    3758 main.go:141] libmachine: (ha-286000) DBG | Using UUID ad96de67-e238-408c-89eb-d74e5b68d297
	I0816 10:01:49.471852    3758 main.go:141] libmachine: (ha-286000) DBG | Generated MAC 66:c8:48:4e:12:1b
	I0816 10:01:49.471873    3758 main.go:141] libmachine: (ha-286000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000
	I0816 10:01:49.471908    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ad96de67-e238-408c-89eb-d74e5b68d297", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000b2330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:01:49.471951    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ad96de67-e238-408c-89eb-d74e5b68d297", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000b2330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:01:49.471996    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "ad96de67-e238-408c-89eb-d74e5b68d297", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/ha-286000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"}
	I0816 10:01:49.472043    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U ad96de67-e238-408c-89eb-d74e5b68d297 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/ha-286000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"
	I0816 10:01:49.472063    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 10:01:49.475179    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: Pid is 3771
	I0816 10:01:49.475583    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 0
	I0816 10:01:49.475597    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:49.475674    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:49.476510    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:49.476583    3758 main.go:141] libmachine: (ha-286000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0816 10:01:49.476592    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:01:49.476602    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:01:49.476612    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:01:49.482826    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 10:01:49.534430    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 10:01:49.535040    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:01:49.535056    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:01:49.535064    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:01:49.535072    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:01:49.909510    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 10:01:49.909527    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 10:01:50.024439    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:01:50.024459    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:01:50.024479    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:01:50.024494    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:01:50.025363    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 10:01:50.025375    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 10:01:51.477573    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 1
	I0816 10:01:51.477587    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:51.477660    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:51.478481    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:51.478504    3758 main.go:141] libmachine: (ha-286000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0816 10:01:51.478511    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:01:51.478520    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:01:51.478531    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:01:53.479582    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 2
	I0816 10:01:53.479599    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:53.479760    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:53.480630    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:53.480678    3758 main.go:141] libmachine: (ha-286000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0816 10:01:53.480688    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:01:53.480697    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:01:53.480710    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:01:55.482614    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 3
	I0816 10:01:55.482630    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:55.482703    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:55.483616    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:55.483662    3758 main.go:141] libmachine: (ha-286000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0816 10:01:55.483672    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:01:55.483679    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:01:55.483687    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:01:55.581983    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:55 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 10:01:55.582063    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:55 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 10:01:55.582075    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:55 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 10:01:55.606414    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:55 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 10:01:57.484306    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 4
	I0816 10:01:57.484322    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:57.484414    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:57.485196    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:57.485261    3758 main.go:141] libmachine: (ha-286000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0816 10:01:57.485273    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:01:57.485286    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:01:57.485294    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:01:59.485978    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 5
	I0816 10:01:59.486008    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:59.486192    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:59.487610    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:59.487709    3758 main.go:141] libmachine: (ha-286000) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:01:59.487730    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:01:59.487784    3758 main.go:141] libmachine: (ha-286000) DBG | Found match: 66:c8:48:4e:12:1b
	I0816 10:01:59.487797    3758 main.go:141] libmachine: (ha-286000) DBG | IP: 192.169.0.5
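[Editor's note] The "Searching for ... in /var/db/dhcpd_leases" loop above polls macOS's DHCP lease database every couple of seconds until an entry with the VM's generated MAC appears, then takes that entry's IP. A minimal Go sketch of that step, assuming the stock lease-file layout (blocks of name/ip_address/hw_address lines) and ignoring the leading-zero stripping macOS applies to MACs; ipForMAC is a hypothetical helper, not minikube's actual code:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// ipForMAC scans the leases file for an entry whose hw_address matches hwAddr
// and returns its ip_address, or "" if no lease exists yet (the caller then
// retries, as the "Attempt N" lines above show).
func ipForMAC(leasesPath, hwAddr string) (string, error) {
	f, err := os.Open(leasesPath)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			// remember the IP; the matching hw_address line follows it
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// lease format: hw_address=1,66:c8:48:4e:12:1b
			if strings.HasSuffix(line, ","+hwAddr) {
				return ip, nil
			}
		}
	}
	return "", sc.Err()
}

func main() {
	ip, err := ipForMAC("/var/db/dhcpd_leases", "66:c8:48:4e:12:1b")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("IP:", ip) // the run above resolved to 192.169.0.5
}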
	I0816 10:01:59.487831    3758 main.go:141] libmachine: (ha-286000) Calling .GetConfigRaw
	I0816 10:01:59.488860    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:01:59.489069    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:01:59.489276    3758 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 10:01:59.489289    3758 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:01:59.489463    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:59.489567    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:59.490615    3758 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 10:01:59.490628    3758 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 10:01:59.490644    3758 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 10:01:59.490652    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:01:59.490782    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:01:59.490923    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:01:59.491055    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:01:59.491190    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:01:59.491362    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:01:59.491595    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:01:59.491605    3758 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 10:01:59.511220    3758 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0816 10:02:02.573095    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
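[Editor's note] The handshake error at 10:01:59 followed by the empty `exit 0` result at 10:02:02 is the usual wait-for-SSH pattern: keep retrying a no-op command with the machine's generated key until sshd is up and accepts it. A rough sketch shelling out to the OpenSSH client; the 3-second interval and 2-minute deadline are assumptions, and minikube itself uses an in-process SSH client rather than the ssh binary:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH retries `exit 0` over ssh until it succeeds or the deadline passes.
func waitForSSH(user, host, keyPath string, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		cmd := exec.Command("ssh",
			"-i", keyPath,
			"-o", "StrictHostKeyChecking=no",
			"-o", "ConnectTimeout=5",
			fmt.Sprintf("%s@%s", user, host),
			"exit 0")
		if err := cmd.Run(); err == nil {
			return nil // handshake and key auth succeeded
		}
		time.Sleep(3 * time.Second) // assumed retry interval
	}
	return fmt.Errorf("ssh to %s@%s not available after %s", user, host, deadline)
}

func main() {
	err := waitForSSH("docker", "192.169.0.5",
		"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa",
		2*time.Minute)
	fmt.Println("waitForSSH:", err)
}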
	I0816 10:02:02.573108    3758 main.go:141] libmachine: Detecting the provisioner...
	I0816 10:02:02.573113    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:02.573255    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:02.573385    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.573489    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.573602    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:02.573753    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:02.573906    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:02.573914    3758 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 10:02:02.632510    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 10:02:02.632561    3758 main.go:141] libmachine: found compatible host: buildroot
	I0816 10:02:02.632568    3758 main.go:141] libmachine: Provisioning with buildroot...
	I0816 10:02:02.632573    3758 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:02:02.632721    3758 buildroot.go:166] provisioning hostname "ha-286000"
	I0816 10:02:02.632732    3758 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:02:02.632834    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:02.632956    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:02.633046    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.633122    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.633236    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:02.633363    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:02.633492    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:02.633500    3758 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-286000 && echo "ha-286000" | sudo tee /etc/hostname
	I0816 10:02:02.702336    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-286000
	
	I0816 10:02:02.702354    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:02.702482    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:02.702569    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.702670    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.702756    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:02.702893    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:02.703040    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:02.703052    3758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-286000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-286000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-286000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 10:02:02.766997    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 10:02:02.767016    3758 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19461-1276/.minikube CaCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19461-1276/.minikube}
	I0816 10:02:02.767026    3758 buildroot.go:174] setting up certificates
	I0816 10:02:02.767038    3758 provision.go:84] configureAuth start
	I0816 10:02:02.767044    3758 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:02:02.767175    3758 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:02:02.767270    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:02.767372    3758 provision.go:143] copyHostCerts
	I0816 10:02:02.767406    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:02:02.767476    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem, removing ...
	I0816 10:02:02.767485    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:02:02.767630    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem (1679 bytes)
	I0816 10:02:02.767837    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:02:02.767868    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem, removing ...
	I0816 10:02:02.767873    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:02:02.767991    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem (1078 bytes)
	I0816 10:02:02.768146    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:02:02.768179    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem, removing ...
	I0816 10:02:02.768184    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:02:02.768268    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem (1123 bytes)
	I0816 10:02:02.768410    3758 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem org=jenkins.ha-286000 san=[127.0.0.1 192.169.0.5 ha-286000 localhost minikube]
	I0816 10:02:02.915225    3758 provision.go:177] copyRemoteCerts
	I0816 10:02:02.915279    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 10:02:02.915299    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:02.915444    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:02.915558    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.915640    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:02.915723    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:02.953069    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 10:02:02.953145    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 10:02:02.973516    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 10:02:02.973600    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0816 10:02:02.993038    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 10:02:02.993100    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 10:02:03.013565    3758 provision.go:87] duration metric: took 246.514857ms to configureAuth
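[Editor's note] configureAuth above issues a server certificate signed by the local minikube CA with the SANs listed at 10:02:02.768 (127.0.0.1, 192.169.0.5, ha-286000, localhost, minikube) and then scps it to /etc/docker. A compressed sketch of that kind of issuance with Go's crypto/x509; the self-signed CA stand-in and 2048-bit RSA keys are assumptions, since the real run loads the existing ca.pem/ca-key.pem from the certs directory:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Self-signed CA stand-in; the real run parses ca.pem/ca-key.pem instead.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Server certificate carrying the org and SANs from the provision.go line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-286000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-286000", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.5")},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
}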
	I0816 10:02:03.013581    3758 buildroot.go:189] setting minikube options for container-runtime
	I0816 10:02:03.013724    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:02:03.013737    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:03.013889    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:03.013978    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:03.014067    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.014147    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.014250    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:03.014369    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:03.014498    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:03.014506    3758 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 10:02:03.073645    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 10:02:03.073660    3758 buildroot.go:70] root file system type: tmpfs
	I0816 10:02:03.073734    3758 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 10:02:03.073745    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:03.073872    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:03.073966    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.074063    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.074164    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:03.074292    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:03.074429    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:03.074479    3758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 10:02:03.148891    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 10:02:03.148912    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:03.149052    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:03.149138    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.149235    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.149324    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:03.149466    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:03.149604    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:03.149616    3758 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 10:02:04.780498    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0816 10:02:04.780519    3758 main.go:141] libmachine: Checking connection to Docker...
	I0816 10:02:04.780526    3758 main.go:141] libmachine: (ha-286000) Calling .GetURL
	I0816 10:02:04.780691    3758 main.go:141] libmachine: Docker is up and running!
	I0816 10:02:04.780698    3758 main.go:141] libmachine: Reticulating splines...
	I0816 10:02:04.780702    3758 client.go:171] duration metric: took 16.091876944s to LocalClient.Create
	I0816 10:02:04.780713    3758 start.go:167] duration metric: took 16.091916939s to libmachine.API.Create "ha-286000"
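[Editor's note] The `sudo diff -u ... || { sudo mv ...; }` command at 10:02:03.149 makes the unit install idempotent: the mv, daemon-reload, enable and restart only run when the rendered docker.service.new differs from what is installed (here diff failed because no unit existed yet, so the new file was moved into place and Docker restarted). A local Go sketch of the same guard pattern; installIfChanged is a hypothetical helper and the reload side effect is left to the caller:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// installIfChanged writes newContent to path only when it differs from what is
// already there, and reports whether a daemon-reload/restart is needed.
func installIfChanged(path string, newContent []byte) (changed bool, err error) {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, newContent) {
		return false, nil // identical: skip the mv + daemon-reload + restart
	}
	if err != nil && !os.IsNotExist(err) {
		return false, err
	}
	// Write a ".new" sibling first, then rename into place, mirroring the
	// log's docker.service.new -> docker.service move.
	tmp := path + ".new"
	if err := os.WriteFile(tmp, newContent, 0o644); err != nil {
		return false, err
	}
	return true, os.Rename(tmp, path)
}

func main() {
	changed, err := installIfChanged("/tmp/docker.service", []byte("[Unit]\nDescription=demo\n"))
	fmt.Println("changed:", changed, "err:", err)
}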
	I0816 10:02:04.780722    3758 start.go:293] postStartSetup for "ha-286000" (driver="hyperkit")
	I0816 10:02:04.780729    3758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 10:02:04.780738    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:04.780873    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 10:02:04.780884    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:04.780956    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:04.781047    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:04.781154    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:04.781275    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:04.819650    3758 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 10:02:04.823314    3758 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 10:02:04.823335    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/addons for local assets ...
	I0816 10:02:04.823443    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/files for local assets ...
	I0816 10:02:04.823624    3758 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> 18312.pem in /etc/ssl/certs
	I0816 10:02:04.823631    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /etc/ssl/certs/18312.pem
	I0816 10:02:04.823837    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 10:02:04.831860    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:02:04.850901    3758 start.go:296] duration metric: took 70.172878ms for postStartSetup
	I0816 10:02:04.850935    3758 main.go:141] libmachine: (ha-286000) Calling .GetConfigRaw
	I0816 10:02:04.851561    3758 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:02:04.851702    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:02:04.852053    3758 start.go:128] duration metric: took 16.196888252s to createHost
	I0816 10:02:04.852071    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:04.852167    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:04.852261    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:04.852344    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:04.852420    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:04.852525    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:04.852648    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:04.852655    3758 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 10:02:04.911299    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723827724.986438448
	
	I0816 10:02:04.911317    3758 fix.go:216] guest clock: 1723827724.986438448
	I0816 10:02:04.911322    3758 fix.go:229] Guest: 2024-08-16 10:02:04.986438448 -0700 PDT Remote: 2024-08-16 10:02:04.852061 -0700 PDT m=+16.776125584 (delta=134.377448ms)
	I0816 10:02:04.911341    3758 fix.go:200] guest clock delta is within tolerance: 134.377448ms
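[Editor's note] The clock check above runs `date +%s.%N` in the guest and compares the result against the host clock, accepting the 134ms delta as within tolerance. A small sketch of that comparison; parseGuestClock is a hypothetical helper and the 2-second threshold is an assumption, not minikube's documented value:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns `date +%s.%N` output (e.g. "1723827724.986438448")
// into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// right-pad the fraction to 9 digits so ".98" means 980ms, not 98ns
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1723827724.986438448") // value from the log
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold
	fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
}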
	I0816 10:02:04.911345    3758 start.go:83] releasing machines lock for "ha-286000", held for 16.256325696s
	I0816 10:02:04.911364    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:04.911498    3758 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:02:04.911592    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:04.911942    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:04.912068    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:04.912144    3758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 10:02:04.912171    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:04.912218    3758 ssh_runner.go:195] Run: cat /version.json
	I0816 10:02:04.912231    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:04.912262    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:04.912305    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:04.912360    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:04.912384    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:04.912437    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:04.912464    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:04.912547    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:04.912567    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:04.945688    3758 ssh_runner.go:195] Run: systemctl --version
	I0816 10:02:04.997158    3758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 10:02:05.002278    3758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 10:02:05.002315    3758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 10:02:05.015089    3758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 10:02:05.015098    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:02:05.015195    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:02:05.030338    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 10:02:05.038588    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 10:02:05.046898    3758 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 10:02:05.046932    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 10:02:05.055211    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:02:05.063533    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 10:02:05.073244    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:02:05.081895    3758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 10:02:05.090915    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 10:02:05.099773    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 10:02:05.108719    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 10:02:05.117772    3758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 10:02:05.125574    3758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 10:02:05.145403    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:05.261568    3758 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0816 10:02:05.278920    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:02:05.278996    3758 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 10:02:05.293925    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:02:05.306704    3758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 10:02:05.320000    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:02:05.330230    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:02:05.340255    3758 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 10:02:05.369098    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:02:05.379374    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:02:05.395120    3758 ssh_runner.go:195] Run: which cri-dockerd
	I0816 10:02:05.397921    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 10:02:05.404866    3758 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 10:02:05.417935    3758 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 10:02:05.514793    3758 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 10:02:05.626208    3758 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 10:02:05.626292    3758 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 10:02:05.640317    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:05.737978    3758 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:02:08.105430    3758 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.367458496s)
	I0816 10:02:08.105491    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0816 10:02:08.115970    3758 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0816 10:02:08.129325    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:02:08.140312    3758 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0816 10:02:08.237628    3758 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0816 10:02:08.340285    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:08.436168    3758 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0816 10:02:08.457808    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:02:08.468314    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:08.561830    3758 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0816 10:02:08.620080    3758 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0816 10:02:08.620164    3758 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0816 10:02:08.624716    3758 start.go:563] Will wait 60s for crictl version
	I0816 10:02:08.624764    3758 ssh_runner.go:195] Run: which crictl
	I0816 10:02:08.627806    3758 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 10:02:08.660437    3758 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0816 10:02:08.660505    3758 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:02:08.678073    3758 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:02:08.722165    3758 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0816 10:02:08.722217    3758 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:02:08.722605    3758 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0816 10:02:08.727140    3758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:02:08.736769    3758 kubeadm.go:883] updating cluster {Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 10:02:08.736830    3758 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:02:08.736887    3758 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 10:02:08.748223    3758 docker.go:685] Got preloaded images: 
	I0816 10:02:08.748236    3758 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.0 wasn't preloaded
	I0816 10:02:08.748294    3758 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0816 10:02:08.755601    3758 ssh_runner.go:195] Run: which lz4
	I0816 10:02:08.758510    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0816 10:02:08.758630    3758 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 10:02:08.761594    3758 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 10:02:08.761610    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342554258 bytes)
	I0816 10:02:09.771496    3758 docker.go:649] duration metric: took 1.012922149s to copy over tarball
	I0816 10:02:09.771559    3758 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 10:02:12.050040    3758 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.27849665s)
	I0816 10:02:12.050054    3758 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 10:02:12.075438    3758 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0816 10:02:12.083331    3758 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0816 10:02:12.097261    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:12.204348    3758 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:02:14.516272    3758 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.311936705s)
	I0816 10:02:14.516363    3758 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 10:02:14.529500    3758 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0816 10:02:14.529519    3758 cache_images.go:84] Images are preloaded, skipping loading
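
The docker.go:685/691 check and the cache_images.go:84 decision above reduce to comparing the tags reported by `docker images --format {{.Repository}}:{{.Tag}}` against the set the preload tarball should provide. A minimal Go sketch of that comparison, with illustrative names rather than minikube's actual helpers:

package main

import (
	"fmt"
	"strings"
)

// preloaded reports whether every required image tag appears in the
// newline-separated output of `docker images --format {{.Repository}}:{{.Tag}}`.
func preloaded(dockerImagesOut string, required []string) bool {
	have := make(map[string]bool)
	for _, line := range strings.Split(strings.TrimSpace(dockerImagesOut), "\n") {
		have[strings.TrimSpace(line)] = true
	}
	for _, img := range required {
		if !have[img] {
			return false // e.g. "registry.k8s.io/kube-apiserver:v1.31.0 wasn't preloaded"
		}
	}
	return true
}

func main() {
	out := "registry.k8s.io/kube-apiserver:v1.31.0\nregistry.k8s.io/etcd:3.5.15-0\n"
	fmt.Println(preloaded(out, []string{"registry.k8s.io/kube-apiserver:v1.31.0"})) // true
}

Fed the stdout block above plus the expected image list, it returns true, which is why image loading is skipped here.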
	I0816 10:02:14.529530    3758 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.0 docker true true} ...
	I0816 10:02:14.529607    3758 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-286000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 10:02:14.529674    3758 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0816 10:02:14.568093    3758 cni.go:84] Creating CNI manager for ""
	I0816 10:02:14.568107    3758 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0816 10:02:14.568120    3758 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 10:02:14.568134    3758 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-286000 NodeName:ha-286000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 10:02:14.568222    3758 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-286000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
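
The block above is emitted after kubeadm.go:181 has assembled typed options and kubeadm.go:187 has rendered them to YAML. A rough Go sketch of that render step using text/template; the struct and its fields are hypothetical stand-ins, not minikube's real types:

package main

import (
	"os"
	"text/template"
)

// initCfg is a hypothetical stand-in for the options kubeadm.go:181 logs.
type initCfg struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	CRISocket        string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	// Values mirror the generated config in the log above.
	if err := t.Execute(os.Stdout, initCfg{
		AdvertiseAddress: "192.169.0.5",
		BindPort:         8443,
		NodeName:         "ha-286000",
		CRISocket:        "unix:///var/run/cri-dockerd.sock",
	}); err != nil {
		panic(err)
	}
}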
	
	I0816 10:02:14.568237    3758 kube-vip.go:115] generating kube-vip config ...
	I0816 10:02:14.568286    3758 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 10:02:14.580706    3758 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 10:02:14.580778    3758 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
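
The vip_leaseduration / vip_renewdeadline / vip_retryperiod values in the manifest above drive kube-vip's leader election, which (via client-go) roughly requires retryPeriod < renewDeadline < leaseDuration. A standalone Go sketch of that sanity check using the logged values; validLease is an illustrative helper, not part of kube-vip's API:

package main

import (
	"fmt"
	"time"
)

// validLease mirrors the leader-election invariant:
// retryPeriod < renewDeadline < leaseDuration.
func validLease(lease, renew, retry time.Duration) error {
	if renew >= lease {
		return fmt.Errorf("renewDeadline %v must be below leaseDuration %v", renew, lease)
	}
	if retry >= renew {
		return fmt.Errorf("retryPeriod %v must be below renewDeadline %v", retry, renew)
	}
	return nil
}

func main() {
	// vip_leaseduration=5, vip_renewdeadline=3, vip_retryperiod=1 (seconds) from the manifest.
	fmt.Println(validLease(5*time.Second, 3*time.Second, time.Second)) // <nil>
}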
	I0816 10:02:14.580832    3758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 10:02:14.588124    3758 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 10:02:14.588174    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0816 10:02:14.595283    3758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0816 10:02:14.608721    3758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 10:02:14.622036    3758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0816 10:02:14.635525    3758 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1446 bytes)
	I0816 10:02:14.648868    3758 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0816 10:02:14.651748    3758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:02:14.661305    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:14.752561    3758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:02:14.768967    3758 certs.go:68] Setting up /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000 for IP: 192.169.0.5
	I0816 10:02:14.768980    3758 certs.go:194] generating shared ca certs ...
	I0816 10:02:14.768991    3758 certs.go:226] acquiring lock for ca certs: {Name:mka549fbc467849834d2267da204028c49d88a3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:14.769183    3758 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key
	I0816 10:02:14.769258    3758 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key
	I0816 10:02:14.769269    3758 certs.go:256] generating profile certs ...
	I0816 10:02:14.769315    3758 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key
	I0816 10:02:14.769328    3758 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt with IP's: []
	I0816 10:02:14.849573    3758 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt ...
	I0816 10:02:14.849589    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt: {Name:mk9c6a2b5871e8d33f12e0a58268b028ea37ab4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:14.849911    3758 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key ...
	I0816 10:02:14.849919    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key: {Name:mk7d8ed264e583d7ced6b16b60c72c11b15a1f22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:14.850137    3758 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.af99fd6a
	I0816 10:02:14.850151    3758 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.af99fd6a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.254]
	I0816 10:02:14.968129    3758 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.af99fd6a ...
	I0816 10:02:14.968143    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.af99fd6a: {Name:mk3b8945a16852dca8274ec1ab3a9f2eade80831 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:14.968464    3758 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.af99fd6a ...
	I0816 10:02:14.968473    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.af99fd6a: {Name:mk90fcce3348a214c4733fe5a1fb806faff7773f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:14.968717    3758 certs.go:381] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.af99fd6a -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt
	I0816 10:02:14.968940    3758 certs.go:385] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.af99fd6a -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key
	I0816 10:02:14.969171    3758 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key
	I0816 10:02:14.969185    3758 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt with IP's: []
	I0816 10:02:15.029329    3758 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt ...
	I0816 10:02:15.029338    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt: {Name:mk70aaa3903fb58df2124d779dbea07aa95769f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:15.029604    3758 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key ...
	I0816 10:02:15.029611    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key: {Name:mka3a85354927e8da31f9dfc67f92dea3c77ca65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
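
The crypto.go steps above issue CA-signed profile certificates whose IP SANs cover the service IP, localhost, the node IP, and the HA VIP 192.169.0.254. A self-contained Go sketch of the same idea with crypto/x509; key sizes, lifetimes, and subject names are placeholders, not minikube's actual parameters:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Placeholder CA; minikube reuses the existing minikubeCA instead.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leaf := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs mirroring the log line: service IP, localhost, node IP, HA VIP.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.169.0.5"),
			net.ParseIP("192.169.0.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, leaf, ca, &leafKey.PublicKey, caKey)
	fmt.Println(len(der) > 0, err) // true <nil>
}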
	I0816 10:02:15.029850    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 10:02:15.029876    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 10:02:15.029898    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 10:02:15.029940    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 10:02:15.029958    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 10:02:15.029990    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 10:02:15.030008    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 10:02:15.030025    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 10:02:15.030118    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem (1338 bytes)
	W0816 10:02:15.030163    3758 certs.go:480] ignoring /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831_empty.pem, impossibly tiny 0 bytes
	I0816 10:02:15.030171    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 10:02:15.030240    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem (1078 bytes)
	I0816 10:02:15.030268    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem (1123 bytes)
	I0816 10:02:15.030354    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem (1679 bytes)
	I0816 10:02:15.030467    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:02:15.030499    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:15.030525    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem -> /usr/share/ca-certificates/1831.pem
	I0816 10:02:15.030542    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /usr/share/ca-certificates/18312.pem
	I0816 10:02:15.031030    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 10:02:15.051618    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 10:02:15.071318    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 10:02:15.091017    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 10:02:15.110497    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0816 10:02:15.131033    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 10:02:15.150747    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 10:02:15.170186    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 10:02:15.189808    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 10:02:15.209868    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem --> /usr/share/ca-certificates/1831.pem (1338 bytes)
	I0816 10:02:15.229378    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /usr/share/ca-certificates/18312.pem (1708 bytes)
	I0816 10:02:15.248667    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 10:02:15.262035    3758 ssh_runner.go:195] Run: openssl version
	I0816 10:02:15.266135    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 10:02:15.275959    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:15.279367    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:48 /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:15.279403    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:15.283611    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 10:02:15.291872    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1831.pem && ln -fs /usr/share/ca-certificates/1831.pem /etc/ssl/certs/1831.pem"
	I0816 10:02:15.299890    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1831.pem
	I0816 10:02:15.303179    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 16:57 /usr/share/ca-certificates/1831.pem
	I0816 10:02:15.303212    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1831.pem
	I0816 10:02:15.307423    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1831.pem /etc/ssl/certs/51391683.0"
	I0816 10:02:15.315741    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18312.pem && ln -fs /usr/share/ca-certificates/18312.pem /etc/ssl/certs/18312.pem"
	I0816 10:02:15.324088    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18312.pem
	I0816 10:02:15.327441    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 16:57 /usr/share/ca-certificates/18312.pem
	I0816 10:02:15.327479    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18312.pem
	I0816 10:02:15.331723    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18312.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 10:02:15.339921    3758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 10:02:15.343061    3758 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 10:02:15.343109    3758 kubeadm.go:392] StartCluster: {Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:02:15.343194    3758 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0816 10:02:15.356016    3758 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 10:02:15.363405    3758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 10:02:15.370635    3758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 10:02:15.377806    3758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 10:02:15.377819    3758 kubeadm.go:157] found existing configuration files:
	
	I0816 10:02:15.377855    3758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 10:02:15.384868    3758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 10:02:15.384903    3758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 10:02:15.392001    3758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 10:02:15.398935    3758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 10:02:15.398971    3758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 10:02:15.406181    3758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 10:02:15.413108    3758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 10:02:15.413142    3758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 10:02:15.422603    3758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 10:02:15.435692    3758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 10:02:15.435749    3758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
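
The four grep-then-`rm -f` cycles above all apply one rule: keep a kubeconfig only if it already mentions https://control-plane.minikube.internal:8443, otherwise delete it so kubeadm regenerates it. A compact Go sketch of that loop, assuming direct file access rather than the ssh_runner used here:

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanupStale keeps each kubeconfig only if it already targets the expected
// endpoint; otherwise it removes the file so kubeadm can regenerate it.
func cleanupStale(endpoint string, paths []string) {
	for _, p := range paths {
		if data, err := os.ReadFile(p); err == nil && strings.Contains(string(data), endpoint) {
			continue // config already points at the right endpoint; keep it
		}
		err := os.Remove(p) // the sketch's analogue of `sudo rm -f`
		fmt.Println("rm", p, "->", err)
	}
}

func main() {
	cleanupStale("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}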
	I0816 10:02:15.450920    3758 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 10:02:15.521417    3758 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 10:02:15.521501    3758 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 10:02:15.590843    3758 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 10:02:15.590923    3758 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 10:02:15.591008    3758 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 10:02:15.600874    3758 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 10:02:15.632377    3758 out.go:235]   - Generating certificates and keys ...
	I0816 10:02:15.632427    3758 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 10:02:15.632475    3758 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 10:02:15.791885    3758 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0816 10:02:15.852803    3758 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0816 10:02:15.961982    3758 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0816 10:02:16.146557    3758 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0816 10:02:16.493401    3758 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0816 10:02:16.493556    3758 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-286000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I0816 10:02:16.704832    3758 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0816 10:02:16.705108    3758 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-286000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I0816 10:02:16.922818    3758 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0816 10:02:17.082810    3758 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0816 10:02:17.393939    3758 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0816 10:02:17.394070    3758 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 10:02:17.465159    3758 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 10:02:17.731984    3758 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 10:02:18.019833    3758 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 10:02:18.164168    3758 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 10:02:18.273756    3758 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 10:02:18.274215    3758 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 10:02:18.275949    3758 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 10:02:18.300490    3758 out.go:235]   - Booting up control plane ...
	I0816 10:02:18.300569    3758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 10:02:18.300638    3758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 10:02:18.300693    3758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 10:02:18.300780    3758 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 10:02:18.300850    3758 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 10:02:18.300879    3758 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 10:02:18.405245    3758 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 10:02:18.405330    3758 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 10:02:18.909805    3758 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 504.760374ms
	I0816 10:02:18.909928    3758 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0816 10:02:24.844637    3758 kubeadm.go:310] [api-check] The API server is healthy after 5.938882642s
	I0816 10:02:24.854864    3758 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 10:02:24.862134    3758 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 10:02:24.890940    3758 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 10:02:24.891101    3758 kubeadm.go:310] [mark-control-plane] Marking the node ha-286000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 10:02:24.901441    3758 kubeadm.go:310] [bootstrap-token] Using token: 73merd.1elxantqs1p5mkiz
	I0816 10:02:24.939545    3758 out.go:235]   - Configuring RBAC rules ...
	I0816 10:02:24.939737    3758 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 10:02:24.944670    3758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 10:02:24.951393    3758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 10:02:24.953383    3758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 10:02:24.955899    3758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 10:02:24.958387    3758 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 10:02:25.251799    3758 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 10:02:25.676108    3758 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 10:02:26.249233    3758 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 10:02:26.249936    3758 kubeadm.go:310] 
	I0816 10:02:26.250001    3758 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 10:02:26.250014    3758 kubeadm.go:310] 
	I0816 10:02:26.250090    3758 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 10:02:26.250102    3758 kubeadm.go:310] 
	I0816 10:02:26.250122    3758 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 10:02:26.250175    3758 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 10:02:26.250221    3758 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 10:02:26.250229    3758 kubeadm.go:310] 
	I0816 10:02:26.250268    3758 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 10:02:26.250272    3758 kubeadm.go:310] 
	I0816 10:02:26.250305    3758 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 10:02:26.250313    3758 kubeadm.go:310] 
	I0816 10:02:26.250359    3758 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 10:02:26.250425    3758 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 10:02:26.250484    3758 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 10:02:26.250491    3758 kubeadm.go:310] 
	I0816 10:02:26.250569    3758 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 10:02:26.250648    3758 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 10:02:26.250656    3758 kubeadm.go:310] 
	I0816 10:02:26.250716    3758 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 73merd.1elxantqs1p5mkiz \
	I0816 10:02:26.250793    3758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:995590413ea712c4f7db2cf849d28264ebc291e6cd53d741ce1d22c1daaa1602 \
	I0816 10:02:26.250811    3758 kubeadm.go:310] 	--control-plane 
	I0816 10:02:26.250817    3758 kubeadm.go:310] 
	I0816 10:02:26.250879    3758 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 10:02:26.250885    3758 kubeadm.go:310] 
	I0816 10:02:26.250947    3758 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 73merd.1elxantqs1p5mkiz \
	I0816 10:02:26.251036    3758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:995590413ea712c4f7db2cf849d28264ebc291e6cd53d741ce1d22c1daaa1602 
	I0816 10:02:26.251297    3758 kubeadm.go:310] W0816 17:02:15.599099    1566 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 10:02:26.251521    3758 kubeadm.go:310] W0816 17:02:15.600578    1566 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 10:02:26.251608    3758 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 10:02:26.251620    3758 cni.go:84] Creating CNI manager for ""
	I0816 10:02:26.251624    3758 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0816 10:02:26.289324    3758 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 10:02:26.363318    3758 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0816 10:02:26.368971    3758 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0816 10:02:26.368983    3758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0816 10:02:26.382855    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0816 10:02:26.629869    3758 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 10:02:26.629947    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-286000 minikube.k8s.io/updated_at=2024_08_16T10_02_26_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd minikube.k8s.io/name=ha-286000 minikube.k8s.io/primary=true
	I0816 10:02:26.629949    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:26.658712    3758 ops.go:34] apiserver oom_adj: -16
	I0816 10:02:26.756402    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:27.257667    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:27.757259    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:28.256864    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:28.757602    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:29.256802    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:29.757282    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:29.882342    3758 kubeadm.go:1113] duration metric: took 3.252511045s to wait for elevateKubeSystemPrivileges
	I0816 10:02:29.882365    3758 kubeadm.go:394] duration metric: took 14.539506386s to StartCluster
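
The repeated `kubectl get sa default` runs above, spaced roughly 500ms apart, are a poll-until-ready loop whose elapsed time becomes the 3.25s elevateKubeSystemPrivileges metric. A generic Go sketch of such a wait, with a stubbed probe standing in for the kubectl call:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor polls probe at a fixed interval until it succeeds or the timeout
// elapses, returning the elapsed time as a duration metric.
func waitFor(probe func() error, interval, timeout time.Duration) (time.Duration, error) {
	start := time.Now()
	for {
		if err := probe(); err == nil {
			return time.Since(start), nil
		}
		if time.Since(start) > timeout {
			return time.Since(start), errors.New("timed out")
		}
		time.Sleep(interval)
	}
}

func main() {
	calls := 0
	probe := func() error { // stands in for `kubectl get sa default`
		calls++
		if calls < 4 {
			return errors.New("serviceaccount not found yet")
		}
		return nil
	}
	d, err := waitFor(probe, 500*time.Millisecond, time.Minute)
	fmt.Println(d.Round(time.Millisecond), err)
}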
	I0816 10:02:29.882380    3758 settings.go:142] acquiring lock: {Name:mk60e377244eac756871b74b6bd45115654652a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:29.882471    3758 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:02:29.882922    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/kubeconfig: {Name:mk7f4491c8ec5cf96c96a5a1a5dbfba4301394a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:29.883167    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 10:02:29.883170    3758 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:02:29.883182    3758 start.go:241] waiting for startup goroutines ...
	I0816 10:02:29.883204    3758 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 10:02:29.883238    3758 addons.go:69] Setting storage-provisioner=true in profile "ha-286000"
	I0816 10:02:29.883244    3758 addons.go:69] Setting default-storageclass=true in profile "ha-286000"
	I0816 10:02:29.883261    3758 addons.go:234] Setting addon storage-provisioner=true in "ha-286000"
	I0816 10:02:29.883278    3758 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-286000"
	I0816 10:02:29.883280    3758 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:02:29.883305    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:02:29.883558    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:29.883573    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:29.883575    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:29.883598    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:29.892969    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51028
	I0816 10:02:29.893238    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51030
	I0816 10:02:29.893356    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:29.893605    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:29.893730    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:29.893741    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:29.893938    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:29.893951    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:29.893976    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:29.894142    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:29.894254    3758 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:02:29.894338    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:29.894365    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:29.894397    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:29.894424    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:02:29.896690    3758 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:02:29.896968    3758 kapi.go:59] client config for ha-286000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key", CAFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xa202f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 10:02:29.897373    3758 cert_rotation.go:140] Starting client certificate rotation controller
	I0816 10:02:29.897530    3758 addons.go:234] Setting addon default-storageclass=true in "ha-286000"
	I0816 10:02:29.897550    3758 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:02:29.897771    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:29.897789    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:29.903669    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51032
	I0816 10:02:29.904077    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:29.904431    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:29.904451    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:29.904736    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:29.904889    3758 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:02:29.904993    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:29.905054    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:02:29.906086    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:29.906730    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51034
	I0816 10:02:29.907081    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:29.907463    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:29.907491    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:29.907694    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:29.908045    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:29.908061    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:29.917300    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51036
	I0816 10:02:29.917667    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:29.918020    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:29.918034    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:29.918277    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:29.918384    3758 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:02:29.918483    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:29.918572    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:02:29.919534    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:29.919671    3758 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 10:02:29.919680    3758 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 10:02:29.919688    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:29.919789    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:29.919896    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:29.919990    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:29.920072    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:29.928735    3758 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 10:02:29.965711    3758 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 10:02:29.965725    3758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 10:02:29.965744    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:29.965926    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:29.966043    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:29.966151    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:29.966280    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:30.033245    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 10:02:30.037035    3758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 10:02:30.090040    3758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 10:02:30.396267    3758 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
	I0816 10:02:30.396311    3758 main.go:141] libmachine: Making call to close driver server
	I0816 10:02:30.396323    3758 main.go:141] libmachine: (ha-286000) Calling .Close
	I0816 10:02:30.396482    3758 main.go:141] libmachine: Successfully made call to close driver server
	I0816 10:02:30.396488    3758 main.go:141] libmachine: (ha-286000) DBG | Closing plugin on server side
	I0816 10:02:30.396492    3758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 10:02:30.396501    3758 main.go:141] libmachine: Making call to close driver server
	I0816 10:02:30.396507    3758 main.go:141] libmachine: (ha-286000) Calling .Close
	I0816 10:02:30.396655    3758 main.go:141] libmachine: Successfully made call to close driver server
	I0816 10:02:30.396662    3758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 10:02:30.396713    3758 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0816 10:02:30.396725    3758 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0816 10:02:30.396804    3758 round_trippers.go:463] GET https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0816 10:02:30.396810    3758 round_trippers.go:469] Request Headers:
	I0816 10:02:30.396817    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:02:30.396822    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:02:30.402286    3758 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 10:02:30.402706    3758 round_trippers.go:463] PUT https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0816 10:02:30.402713    3758 round_trippers.go:469] Request Headers:
	I0816 10:02:30.402718    3758 round_trippers.go:473]     Content-Type: application/json
	I0816 10:02:30.402721    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:02:30.402723    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:02:30.404290    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:02:30.404403    3758 main.go:141] libmachine: Making call to close driver server
	I0816 10:02:30.404416    3758 main.go:141] libmachine: (ha-286000) Calling .Close
	I0816 10:02:30.404565    3758 main.go:141] libmachine: Successfully made call to close driver server
	I0816 10:02:30.404575    3758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 10:02:30.404586    3758 main.go:141] libmachine: (ha-286000) DBG | Closing plugin on server side
	I0816 10:02:30.497806    3758 main.go:141] libmachine: Making call to close driver server
	I0816 10:02:30.497818    3758 main.go:141] libmachine: (ha-286000) Calling .Close
	I0816 10:02:30.498006    3758 main.go:141] libmachine: Successfully made call to close driver server
	I0816 10:02:30.498011    3758 main.go:141] libmachine: (ha-286000) DBG | Closing plugin on server side
	I0816 10:02:30.498016    3758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 10:02:30.498023    3758 main.go:141] libmachine: Making call to close driver server
	I0816 10:02:30.498040    3758 main.go:141] libmachine: (ha-286000) Calling .Close
	I0816 10:02:30.498160    3758 main.go:141] libmachine: Successfully made call to close driver server
	I0816 10:02:30.498168    3758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 10:02:30.498179    3758 main.go:141] libmachine: (ha-286000) DBG | Closing plugin on server side
	I0816 10:02:30.538607    3758 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0816 10:02:30.596522    3758 addons.go:510] duration metric: took 713.336883ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0816 10:02:30.596578    3758 start.go:246] waiting for cluster config update ...
	I0816 10:02:30.596599    3758 start.go:255] writing updated cluster config ...
	I0816 10:02:30.634683    3758 out.go:201] 
	I0816 10:02:30.655754    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:02:30.655861    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:02:30.677634    3758 out.go:177] * Starting "ha-286000-m02" control-plane node in "ha-286000" cluster
	I0816 10:02:30.737443    3758 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:02:30.737475    3758 cache.go:56] Caching tarball of preloaded images
	I0816 10:02:30.737660    3758 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 10:02:30.737679    3758 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:02:30.737771    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:02:30.738414    3758 start.go:360] acquireMachinesLock for ha-286000-m02: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:02:30.738494    3758 start.go:364] duration metric: took 63.585µs to acquireMachinesLock for "ha-286000-m02"
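The acquireMachinesLock entry above is a polling lock with a 500ms retry delay and a 13m timeout; the 63µs duration just means the lock was free. A sketch of those semantics using a file-based mutex (minikube's real lock is a juju-style mutex, so the mechanism below is an illustrative stand-in):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
        deadline := time.Now().Add(timeout)
        for {
            // O_EXCL makes creation atomic: exactly one caller wins.
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out waiting for lock %s", path)
            }
            time.Sleep(delay) // retry every Delay, like the 500ms above
        }
    }

    func main() {
        release, err := acquireLock("/tmp/machines.lock", 500*time.Millisecond, 13*time.Minute)
        if err != nil {
            panic(err)
        }
        defer release()
        fmt.Println("lock held")
    }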
	I0816 10:02:30.738517    3758 start.go:93] Provisioning new machine with config: &{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:02:30.738590    3758 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I0816 10:02:30.760743    3758 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 10:02:30.760905    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:30.760935    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:30.771028    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51041
	I0816 10:02:30.771383    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:30.771745    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:30.771760    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:30.771967    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:30.772077    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:02:30.772157    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:30.772261    3758 start.go:159] libmachine.API.Create for "ha-286000" (driver="hyperkit")
	I0816 10:02:30.772276    3758 client.go:168] LocalClient.Create starting
	I0816 10:02:30.772303    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem
	I0816 10:02:30.772358    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:02:30.772368    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:02:30.772413    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem
	I0816 10:02:30.772450    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:02:30.772460    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:02:30.772472    3758 main.go:141] libmachine: Running pre-create checks...
	I0816 10:02:30.772477    3758 main.go:141] libmachine: (ha-286000-m02) Calling .PreCreateCheck
	I0816 10:02:30.772545    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:30.772568    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetConfigRaw
	I0816 10:02:30.781772    3758 main.go:141] libmachine: Creating machine...
	I0816 10:02:30.781789    3758 main.go:141] libmachine: (ha-286000-m02) Calling .Create
	I0816 10:02:30.781943    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:30.782181    3758 main.go:141] libmachine: (ha-286000-m02) DBG | I0816 10:02:30.781934    3805 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:02:30.782269    3758 main.go:141] libmachine: (ha-286000-m02) Downloading /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1276/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 10:02:30.982082    3758 main.go:141] libmachine: (ha-286000-m02) DBG | I0816 10:02:30.982020    3805 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa...
	I0816 10:02:31.266609    3758 main.go:141] libmachine: (ha-286000-m02) DBG | I0816 10:02:31.266514    3805 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/ha-286000-m02.rawdisk...
	I0816 10:02:31.266629    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Writing magic tar header
	I0816 10:02:31.266638    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Writing SSH key tar header
	I0816 10:02:31.267382    3758 main.go:141] libmachine: (ha-286000-m02) DBG | I0816 10:02:31.267242    3805 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02 ...
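The "Writing magic tar header" / "Writing SSH key tar header" steps above seed the fresh raw disk with a small tar stream carrying the SSH key, which the guest unpacks on first boot. A sketch of that layout with Go's archive/tar; the in-archive path, key material, and disk size are illustrative assumptions, and the exact header layout boot2docker expects is not shown in the log:

    package main

    import (
        "archive/tar"
        "os"
    )

    func writeKeyedDisk(path string, key []byte, sizeBytes int64) error {
        f, err := os.Create(path)
        if err != nil {
            return err
        }
        defer f.Close()
        // Tar stream at offset 0: one entry holding the generated SSH key.
        tw := tar.NewWriter(f)
        hdr := &tar.Header{Name: ".ssh/id_rsa", Mode: 0o600, Size: int64(len(key))}
        if err := tw.WriteHeader(hdr); err != nil {
            return err
        }
        if _, err := tw.Write(key); err != nil {
            return err
        }
        if err := tw.Close(); err != nil {
            return err
        }
        // Grow the file to the full disk size; the tail stays sparse.
        return f.Truncate(sizeBytes)
    }

    func main() {
        if err := writeKeyedDisk("ha-286000-m02.rawdisk", []byte("fake-key\n"), 20000*1024*1024); err != nil {
            panic(err)
        }
    }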
	I0816 10:02:31.642864    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:31.642889    3758 main.go:141] libmachine: (ha-286000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/hyperkit.pid
	I0816 10:02:31.642912    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Using UUID f7630301-2bc3-4935-a751-ac72999da031
	I0816 10:02:31.669349    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Generated MAC 72:69:8f:11:68:1d
	I0816 10:02:31.669369    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000
	I0816 10:02:31.669397    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f7630301-2bc3-4935-a751-ac72999da031", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:02:31.669426    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f7630301-2bc3-4935-a751-ac72999da031", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:02:31.669455    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "f7630301-2bc3-4935-a751-ac72999da031", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/ha-286000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"}
	I0816 10:02:31.669484    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U f7630301-2bc3-4935-a751-ac72999da031 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/ha-286000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"
	I0816 10:02:31.669499    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 10:02:31.672492    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: Pid is 3806
	I0816 10:02:31.672957    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 0
	I0816 10:02:31.673000    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:31.673054    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:31.674014    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:31.674086    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:02:31.674098    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:02:31.674120    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:02:31.674129    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:02:31.674137    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:02:31.680025    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 10:02:31.688109    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 10:02:31.688968    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:02:31.688993    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:02:31.689006    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:02:31.689016    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:02:32.077133    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 10:02:32.077153    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 10:02:32.191789    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:02:32.191806    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:02:32.191814    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:02:32.191825    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:02:32.192676    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 10:02:32.192686    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 10:02:33.675165    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 1
	I0816 10:02:33.675183    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:33.675297    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:33.676089    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:33.676141    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:02:33.676153    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:02:33.676162    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:02:33.676169    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:02:33.676193    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:02:35.676266    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 2
	I0816 10:02:35.676285    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:35.676360    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:35.677182    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:35.677237    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:02:35.677248    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:02:35.677262    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:02:35.677270    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:02:35.677281    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:02:37.678515    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 3
	I0816 10:02:37.678532    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:37.678572    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:37.679405    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:37.679441    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:02:37.679451    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:02:37.679461    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:02:37.679468    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:02:37.679483    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:02:37.782496    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:37 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 10:02:37.782531    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:37 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 10:02:37.782540    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:37 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 10:02:37.806630    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:37 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 10:02:39.680161    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 4
	I0816 10:02:39.680178    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:39.680273    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:39.681064    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:39.681112    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:02:39.681136    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:02:39.681155    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:02:39.681170    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:02:39.681181    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:02:41.681380    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 5
	I0816 10:02:41.681414    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:41.681563    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:41.682343    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:41.682392    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:02:41.682406    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:02:41.682415    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found match: 72:69:8f:11:68:1d
	I0816 10:02:41.682421    3758 main.go:141] libmachine: (ha-286000-m02) DBG | IP: 192.169.0.6
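The "Attempt N" loop above polls macOS's DHCP lease database every two seconds until the VM's generated MAC appears, which is how the driver learns the new node's IP (192.169.0.6 here). A sketch of that lookup, assuming the usual key=value layout of /var/db/dhcpd_leases (minor format variations exist across macOS versions):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func ipForMAC(leasesPath, mac string) (string, error) {
        f, err := os.Open(leasesPath)
        if err != nil {
            return "", err
        }
        defer f.Close()
        var ip string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            switch {
            case strings.HasPrefix(line, "ip_address="):
                ip = strings.TrimPrefix(line, "ip_address=")
            case strings.HasPrefix(line, "hw_address="):
                // hw_address=1,72:69:8f:11:68:1d -- strip the leading "1,"
                hw := strings.TrimPrefix(line, "hw_address=")
                if i := strings.Index(hw, ","); i >= 0 {
                    hw = hw[i+1:]
                }
                if hw == mac {
                    return ip, nil
                }
            }
        }
        return "", fmt.Errorf("no lease found for %s", mac)
    }

    func main() {
        fmt.Println(ipForMAC("/var/db/dhcpd_leases", "72:69:8f:11:68:1d"))
    }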
	I0816 10:02:41.682475    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetConfigRaw
	I0816 10:02:41.683135    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:41.683257    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:41.683358    3758 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 10:02:41.683367    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetState
	I0816 10:02:41.683478    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:41.683537    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:41.684414    3758 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 10:02:41.684423    3758 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 10:02:41.684427    3758 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 10:02:41.684431    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:41.684566    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:41.684692    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:41.684809    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:41.684943    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:41.685095    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:41.685300    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:41.685308    3758 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 10:02:42.746559    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
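The WaitForSSH step above simply retries running `exit 0` over SSH until it succeeds. A sketch of the same probe with golang.org/x/crypto/ssh; the host, user, key path, and timeout are this run's values plugged in as placeholders:

    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func waitForSSH(addr, user, keyPath string, timeout time.Duration) error {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a local test VM
            Timeout:         5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for {
            client, err := ssh.Dial("tcp", addr, cfg)
            if err == nil {
                sess, serr := client.NewSession()
                if serr == nil {
                    rerr := sess.Run("exit 0") // the same probe as in the log
                    sess.Close()
                    client.Close()
                    if rerr == nil {
                        return nil
                    }
                } else {
                    client.Close()
                }
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("ssh not ready after %s", timeout)
            }
            time.Sleep(time.Second)
        }
    }

    func main() {
        fmt.Println(waitForSSH("192.169.0.6:22", "docker", "id_rsa", time.Minute))
    }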
	I0816 10:02:42.746571    3758 main.go:141] libmachine: Detecting the provisioner...
	I0816 10:02:42.746577    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:42.746714    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:42.746824    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.746914    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.746988    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:42.747112    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:42.747269    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:42.747277    3758 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 10:02:42.811630    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 10:02:42.811666    3758 main.go:141] libmachine: found compatible host: buildroot
	I0816 10:02:42.811672    3758 main.go:141] libmachine: Provisioning with buildroot...
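Provisioner detection above is a `cat /etc/os-release` followed by matching the ID field against known provisioners ("buildroot" here). A tiny parser for that format, fed with the output shown in the log:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    func parseOSRelease(data string) map[string]string {
        out := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(data))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || strings.HasPrefix(line, "#") {
                continue
            }
            k, v, ok := strings.Cut(line, "=")
            if !ok {
                continue
            }
            out[k] = strings.Trim(v, `"`) // values may be quoted
        }
        return out
    }

    func main() {
        sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\n"
        fields := parseOSRelease(sample)
        fmt.Println(fields["ID"] == "buildroot") // -> compatible host: buildroot
    }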
	I0816 10:02:42.811681    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:02:42.811816    3758 buildroot.go:166] provisioning hostname "ha-286000-m02"
	I0816 10:02:42.811828    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:02:42.811943    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:42.812045    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:42.812136    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.812274    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.812376    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:42.812513    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:42.812673    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:42.812682    3758 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-286000-m02 && echo "ha-286000-m02" | sudo tee /etc/hostname
	I0816 10:02:42.887531    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-286000-m02
	
	I0816 10:02:42.887545    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:42.887676    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:42.887769    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.887867    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.887953    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:42.888075    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:42.888214    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:42.888225    3758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-286000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-286000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-286000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 10:02:42.959625    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 10:02:42.959641    3758 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19461-1276/.minikube CaCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19461-1276/.minikube}
	I0816 10:02:42.959649    3758 buildroot.go:174] setting up certificates
	I0816 10:02:42.959661    3758 provision.go:84] configureAuth start
	I0816 10:02:42.959666    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:02:42.959803    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:02:42.959899    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:42.959980    3758 provision.go:143] copyHostCerts
	I0816 10:02:42.960007    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:02:42.960055    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem, removing ...
	I0816 10:02:42.960061    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:02:42.960954    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem (1078 bytes)
	I0816 10:02:42.961160    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:02:42.961190    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem, removing ...
	I0816 10:02:42.961195    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:02:42.961273    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem (1123 bytes)
	I0816 10:02:42.961439    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:02:42.961509    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem, removing ...
	I0816 10:02:42.961515    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:02:42.961594    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem (1679 bytes)
	I0816 10:02:42.961746    3758 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem org=jenkins.ha-286000-m02 san=[127.0.0.1 192.169.0.6 ha-286000-m02 localhost minikube]
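The server cert above is issued from the local CA with the SAN list [127.0.0.1 192.169.0.6 ha-286000-m02 localhost minikube]. A crypto/x509 sketch of that issuance; the key size, validity windows, and self-generated CA are illustrative stand-ins for the real files under .minikube/certs (errors elided for brevity):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // The real flow loads certs/ca.pem and certs/ca-key.pem; we
        // self-generate a CA here to keep the sketch runnable.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-286000-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The SAN set from the log line above:
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.6")},
            DNSNames:    []string{"ha-286000-m02", "localhost", "minikube"},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }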
	I0816 10:02:43.093511    3758 provision.go:177] copyRemoteCerts
	I0816 10:02:43.093556    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 10:02:43.093572    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:43.093719    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:43.093818    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.093917    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:43.094005    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:02:43.132229    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 10:02:43.132299    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 10:02:43.151814    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 10:02:43.151873    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 10:02:43.171464    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 10:02:43.171532    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 10:02:43.191045    3758 provision.go:87] duration metric: took 231.381757ms to configureAuth
	I0816 10:02:43.191057    3758 buildroot.go:189] setting minikube options for container-runtime
	I0816 10:02:43.191187    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:02:43.191200    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:43.191321    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:43.191410    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:43.191497    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.191580    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.191658    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:43.191759    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:43.191891    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:43.191898    3758 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 10:02:43.256733    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 10:02:43.256744    3758 buildroot.go:70] root file system type: tmpfs
	I0816 10:02:43.256823    3758 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 10:02:43.256835    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:43.256961    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:43.257048    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.257142    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.257220    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:43.257364    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:43.257505    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:43.257553    3758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 10:02:43.330709    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 10:02:43.330729    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:43.330874    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:43.330977    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.331073    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.331167    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:43.331293    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:43.331440    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:43.331451    3758 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 10:02:44.884634    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0816 10:02:44.884648    3758 main.go:141] libmachine: Checking connection to Docker...
	I0816 10:02:44.884655    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetURL
	I0816 10:02:44.884793    3758 main.go:141] libmachine: Docker is up and running!
	I0816 10:02:44.884799    3758 main.go:141] libmachine: Reticulating splines...
	I0816 10:02:44.884804    3758 client.go:171] duration metric: took 14.112778669s to LocalClient.Create
	I0816 10:02:44.884820    3758 start.go:167] duration metric: took 14.112810851s to libmachine.API.Create "ha-286000"
	I0816 10:02:44.884826    3758 start.go:293] postStartSetup for "ha-286000-m02" (driver="hyperkit")
	I0816 10:02:44.884832    3758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 10:02:44.884843    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:44.884991    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 10:02:44.885004    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:44.885101    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:44.885204    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:44.885335    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:44.885428    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:02:44.925701    3758 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 10:02:44.929599    3758 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 10:02:44.929613    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/addons for local assets ...
	I0816 10:02:44.929722    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/files for local assets ...
	I0816 10:02:44.929908    3758 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> 18312.pem in /etc/ssl/certs
	I0816 10:02:44.929914    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /etc/ssl/certs/18312.pem
	I0816 10:02:44.930119    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 10:02:44.940484    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:02:44.973200    3758 start.go:296] duration metric: took 88.36731ms for postStartSetup
	I0816 10:02:44.973230    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetConfigRaw
	I0816 10:02:44.973848    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:02:44.973996    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:02:44.974351    3758 start.go:128] duration metric: took 14.23599979s to createHost
	I0816 10:02:44.974366    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:44.974448    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:44.974568    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:44.974660    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:44.974744    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:44.974844    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:44.974966    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:44.974973    3758 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 10:02:45.036267    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723827764.932365297
	
	I0816 10:02:45.036285    3758 fix.go:216] guest clock: 1723827764.932365297
	I0816 10:02:45.036291    3758 fix.go:229] Guest: 2024-08-16 10:02:44.932365297 -0700 PDT Remote: 2024-08-16 10:02:44.97436 -0700 PDT m=+56.899083401 (delta=-41.994703ms)
	I0816 10:02:45.036301    3758 fix.go:200] guest clock delta is within tolerance: -41.994703ms
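The guest-clock check above parses the output of `date +%s.%N` into a timestamp and compares it with the host clock, skipping a resync when the delta is inside tolerance (-41.99ms here). A sketch of that parse-and-compare:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func parseGuestClock(out string) (time.Time, error) {
        sec, frac, _ := strings.Cut(strings.TrimSpace(out), ".")
        s, err := strconv.ParseInt(sec, 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        ns := int64(0)
        if frac != "" {
            // pad the fractional part out to nanoseconds, then truncate
            for len(frac) < 9 {
                frac += "0"
            }
            ns, err = strconv.ParseInt(frac[:9], 10, 64)
            if err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(s, ns), nil
    }

    func main() {
        guest, err := parseGuestClock("1723827764.932365297")
        if err != nil {
            panic(err)
        }
        fmt.Println("guest clock delta:", guest.Sub(time.Now())) // e.g. ~-42ms in this run
    }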
	I0816 10:02:45.036306    3758 start.go:83] releasing machines lock for "ha-286000-m02", held for 14.298063452s
	I0816 10:02:45.036338    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:45.036468    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:02:45.062334    3758 out.go:177] * Found network options:
	I0816 10:02:45.105996    3758 out.go:177]   - NO_PROXY=192.169.0.5
	W0816 10:02:45.128174    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:02:45.128216    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:45.128829    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:45.128969    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:45.129060    3758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 10:02:45.129088    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	W0816 10:02:45.129115    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:02:45.129222    3758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0816 10:02:45.129232    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:45.129238    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:45.129370    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:45.129393    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:45.129500    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:45.129515    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:45.129631    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:02:45.129647    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:45.129727    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	W0816 10:02:45.164265    3758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 10:02:45.164324    3758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 10:02:45.211468    3758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 10:02:45.211489    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:02:45.211595    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:02:45.228144    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 10:02:45.237310    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 10:02:45.246272    3758 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 10:02:45.246316    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 10:02:45.255987    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:02:45.265400    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 10:02:45.274661    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:02:45.283628    3758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 10:02:45.292648    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 10:02:45.302207    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 10:02:45.311314    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 10:02:45.320414    3758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 10:02:45.328532    3758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 10:02:45.337229    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:45.444954    3758 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0816 10:02:45.464569    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:02:45.464652    3758 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 10:02:45.488292    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:02:45.500162    3758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 10:02:45.561835    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:02:45.572027    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:02:45.582122    3758 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 10:02:45.603011    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:02:45.613488    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:02:45.628434    3758 ssh_runner.go:195] Run: which cri-dockerd
	I0816 10:02:45.631358    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 10:02:45.638496    3758 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 10:02:45.652676    3758 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 10:02:45.755971    3758 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 10:02:45.860533    3758 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 10:02:45.860559    3758 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 10:02:45.874570    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:45.976095    3758 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:02:48.459358    3758 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.483288325s)
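The 130-byte /etc/docker/daemon.json pushed a few lines up is what switches dockerd to the cgroupfs driver before this restart. Rendering an equivalent document; the exact keys in minikube's template are an assumption here, inferred from the "configuring docker to use cgroupfs" log line:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        cfg := map[string]any{
            "exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
            "log-driver":     "json-file",
            "log-opts":       map[string]string{"max-size": "100m"},
            "storage-driver": "overlay2",
        }
        b, err := json.MarshalIndent(cfg, "", "  ")
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s\n(%d bytes)\n", b, len(b))
    }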
	I0816 10:02:48.459417    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0816 10:02:48.469882    3758 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0816 10:02:48.484429    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:02:48.495038    3758 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0816 10:02:48.597129    3758 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0816 10:02:48.691412    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:48.797592    3758 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0816 10:02:48.812075    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:02:48.823307    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:48.918652    3758 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0816 10:02:48.980191    3758 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0816 10:02:48.980257    3758 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0816 10:02:48.984954    3758 start.go:563] Will wait 60s for crictl version
	I0816 10:02:48.985011    3758 ssh_runner.go:195] Run: which crictl
	I0816 10:02:48.988185    3758 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 10:02:49.015262    3758 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0816 10:02:49.015344    3758 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:02:49.032341    3758 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
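	Both runtime probes above can be reproduced by hand on the node; crictl reads its endpoint from the /etc/crictl.yaml written earlier:
	
	sudo crictl version                               # reports RuntimeName/RuntimeVersion via cri-dockerd
	docker version --format '{{.Server.Version}}'     # raw Docker server version (27.1.2 here)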
	I0816 10:02:49.078128    3758 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0816 10:02:49.137645    3758 out.go:177]   - env NO_PROXY=192.169.0.5
	I0816 10:02:49.176661    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:02:49.177129    3758 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0816 10:02:49.181778    3758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:02:49.191522    3758 mustload.go:65] Loading cluster: ha-286000
	I0816 10:02:49.191665    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:02:49.191885    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:49.191909    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:49.200721    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51064
	I0816 10:02:49.201069    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:49.201413    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:49.201424    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:49.201648    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:49.201776    3758 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:02:49.201862    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:49.201960    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:02:49.202920    3758 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:02:49.203188    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:49.203205    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:49.211926    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51066
	I0816 10:02:49.212258    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:49.212635    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:49.212652    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:49.212860    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:49.212967    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:49.213056    3758 certs.go:68] Setting up /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000 for IP: 192.169.0.6
	I0816 10:02:49.213061    3758 certs.go:194] generating shared ca certs ...
	I0816 10:02:49.213071    3758 certs.go:226] acquiring lock for ca certs: {Name:mka549fbc467849834d2267da204028c49d88a3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:49.213229    3758 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key
	I0816 10:02:49.213305    3758 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key
	I0816 10:02:49.213314    3758 certs.go:256] generating profile certs ...
	I0816 10:02:49.213406    3758 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key
	I0816 10:02:49.213429    3758 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.5eb44438
	I0816 10:02:49.213443    3758 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.5eb44438 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I0816 10:02:49.263058    3758 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.5eb44438 ...
	I0816 10:02:49.263077    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.5eb44438: {Name:mk266d5e842df442c104f85113e06d171d5a89ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:49.263395    3758 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.5eb44438 ...
	I0816 10:02:49.263404    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.5eb44438: {Name:mk1428912a4c1edaafe55f9bff302c19ad337785 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:49.263623    3758 certs.go:381] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.5eb44438 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt
	I0816 10:02:49.263846    3758 certs.go:385] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.5eb44438 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key
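	The apiserver certificate assembled above carries the IP SANs listed in the log (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.169.0.5, 192.169.0.6, 192.169.0.254); they can be inspected with:
	
	# Print the SAN extension of the freshly generated apiserver cert.
	openssl x509 -in apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'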
	I0816 10:02:49.264093    3758 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key
	I0816 10:02:49.264103    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 10:02:49.264125    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 10:02:49.264143    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 10:02:49.264163    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 10:02:49.264180    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 10:02:49.264199    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 10:02:49.264223    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 10:02:49.264242    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 10:02:49.264331    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem (1338 bytes)
	W0816 10:02:49.264378    3758 certs.go:480] ignoring /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831_empty.pem, impossibly tiny 0 bytes
	I0816 10:02:49.264386    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 10:02:49.264418    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem (1078 bytes)
	I0816 10:02:49.264449    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem (1123 bytes)
	I0816 10:02:49.264477    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem (1679 bytes)
	I0816 10:02:49.264545    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:02:49.264593    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /usr/share/ca-certificates/18312.pem
	I0816 10:02:49.264614    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:49.264634    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem -> /usr/share/ca-certificates/1831.pem
	I0816 10:02:49.264663    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:49.264814    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:49.264897    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:49.265014    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:49.265108    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:49.296192    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0816 10:02:49.299577    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0816 10:02:49.312765    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0816 10:02:49.316551    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0816 10:02:49.332167    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0816 10:02:49.336199    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0816 10:02:49.349166    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0816 10:02:49.354242    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0816 10:02:49.364119    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0816 10:02:49.367349    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0816 10:02:49.375423    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0816 10:02:49.378717    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0816 10:02:49.386918    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 10:02:49.408036    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 10:02:49.428163    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 10:02:49.448952    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 10:02:49.468986    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0816 10:02:49.489674    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 10:02:49.509597    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 10:02:49.530175    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 10:02:49.550578    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /usr/share/ca-certificates/18312.pem (1708 bytes)
	I0816 10:02:49.571266    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 10:02:49.590995    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem --> /usr/share/ca-certificates/1831.pem (1338 bytes)
	I0816 10:02:49.611766    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0816 10:02:49.625222    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0816 10:02:49.638852    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0816 10:02:49.653497    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0816 10:02:49.667098    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0816 10:02:49.680935    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0816 10:02:49.695437    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0816 10:02:49.708684    3758 ssh_runner.go:195] Run: openssl version
	I0816 10:02:49.712979    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18312.pem && ln -fs /usr/share/ca-certificates/18312.pem /etc/ssl/certs/18312.pem"
	I0816 10:02:49.721565    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18312.pem
	I0816 10:02:49.725513    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 16:57 /usr/share/ca-certificates/18312.pem
	I0816 10:02:49.725580    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18312.pem
	I0816 10:02:49.730364    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18312.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 10:02:49.739184    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 10:02:49.747494    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:49.750923    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:48 /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:49.750955    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:49.755195    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 10:02:49.763703    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1831.pem && ln -fs /usr/share/ca-certificates/1831.pem /etc/ssl/certs/1831.pem"
	I0816 10:02:49.772914    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1831.pem
	I0816 10:02:49.776377    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 16:57 /usr/share/ca-certificates/1831.pem
	I0816 10:02:49.776421    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1831.pem
	I0816 10:02:49.780731    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1831.pem /etc/ssl/certs/51391683.0"
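	The /etc/ssl/certs links created above (3ec20f2e.0, b5213941.0, 51391683.0) follow the OpenSSL c_rehash convention: the link name is the certificate's subject hash plus a ".0" suffix, which is what the openssl x509 -hash calls compute. A minimal sketch for one cert:
	
	# Derive the subject hash and create the hash-named symlink.
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # yields b5213941.0 here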
	I0816 10:02:49.789078    3758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 10:02:49.792242    3758 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 10:02:49.792277    3758 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.0 docker true true} ...
	I0816 10:02:49.792332    3758 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-286000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 10:02:49.792349    3758 kube-vip.go:115] generating kube-vip config ...
	I0816 10:02:49.792381    3758 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 10:02:49.804469    3758 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 10:02:49.804511    3758 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
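	Once this static pod starts and kube-vip wins leader election, the HA VIP configured above should answer on the API port; a quick probe from the host (hedged: anonymous access to /version is the Kubernetes default but not guaranteed):
	
	curl -k https://192.169.0.254:8443/version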
	I0816 10:02:49.804567    3758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 10:02:49.812921    3758 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0816 10:02:49.812983    3758 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0816 10:02:49.821031    3758 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0816 10:02:49.821035    3758 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0816 10:02:49.821039    3758 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubectl
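	Each download above pins a sha256 via the checksum=file:... query; the equivalent manual fetch-and-verify for one binary (URLs as logged, two spaces required in the checksum line):
	
	curl -LO https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet
	curl -LO https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check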
	I0816 10:02:51.911279    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0816 10:02:51.911374    3758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0816 10:02:51.915115    3758 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0816 10:02:51.915150    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0816 10:02:52.537289    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0816 10:02:52.537383    3758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0816 10:02:52.540898    3758 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0816 10:02:52.540929    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0816 10:02:53.267467    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:02:53.278589    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0816 10:02:53.278731    3758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0816 10:02:53.282055    3758 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0816 10:02:53.282073    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0816 10:02:53.498714    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0816 10:02:53.506112    3758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0816 10:02:53.519672    3758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 10:02:53.533551    3758 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0816 10:02:53.548024    3758 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0816 10:02:53.551124    3758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:02:53.560470    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:53.659270    3758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:02:53.674625    3758 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:02:53.674900    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:53.674923    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:53.683891    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51093
	I0816 10:02:53.684256    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:53.684641    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:53.684658    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:53.684885    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:53.685003    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:53.685084    3758 start.go:317] joinCluster: &{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:02:53.685155    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0816 10:02:53.685167    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:53.685248    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:53.685339    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:53.685423    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:53.685503    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:53.765911    3758 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:02:53.765972    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mpmdnf.6rzc0wpb01e0t6lz --discovery-token-ca-cert-hash sha256:995590413ea712c4f7db2cf849d28264ebc291e6cd53d741ce1d22c1daaa1602 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-286000-m02 --control-plane --apiserver-advertise-address=192.169.0.6 --apiserver-bind-port=8443"
	I0816 10:03:22.070391    3758 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mpmdnf.6rzc0wpb01e0t6lz --discovery-token-ca-cert-hash sha256:995590413ea712c4f7db2cf849d28264ebc291e6cd53d741ce1d22c1daaa1602 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-286000-m02 --control-plane --apiserver-advertise-address=192.169.0.6 --apiserver-bind-port=8443": (28.304928658s)
	I0816 10:03:22.070414    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0816 10:03:22.427732    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-286000-m02 minikube.k8s.io/updated_at=2024_08_16T10_03_22_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd minikube.k8s.io/name=ha-286000 minikube.k8s.io/primary=false
	I0816 10:03:22.531211    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-286000-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0816 10:03:22.629050    3758 start.go:319] duration metric: took 28.944509117s to joinCluster
	I0816 10:03:22.629105    3758 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:03:22.629360    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:03:22.652598    3758 out.go:177] * Verifying Kubernetes components...
	I0816 10:03:22.726472    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:03:22.952731    3758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:03:22.975401    3758 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:03:22.975621    3758 kapi.go:59] client config for ha-286000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key", CAFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xa202f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0816 10:03:22.975663    3758 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0816 10:03:22.975836    3758 node_ready.go:35] waiting up to 6m0s for node "ha-286000-m02" to be "Ready" ...
	I0816 10:03:22.975886    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:22.975891    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:22.975897    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:22.975901    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:22.983180    3758 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0816 10:03:23.476314    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:23.476365    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:23.476380    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:23.476394    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:23.502874    3758 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0816 10:03:23.977290    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:23.977304    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:23.977311    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:23.977315    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:23.979495    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:24.477835    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:24.477851    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:24.477858    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:24.477861    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:24.480701    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:24.977514    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:24.977568    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:24.977580    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:24.977588    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:24.980856    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:24.981299    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:25.476235    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:25.476252    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:25.476260    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:25.476277    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:25.478567    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:25.976418    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:25.976469    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:25.976477    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:25.976480    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:25.978730    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:26.476423    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:26.476439    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:26.476445    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:26.476448    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:26.478424    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:26.975919    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:26.975934    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:26.975940    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:26.975945    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:26.977809    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:27.475966    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:27.475981    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:27.475987    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:27.475990    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:27.477769    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:27.478121    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:27.977123    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:27.977139    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:27.977146    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:27.977150    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:27.979266    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:28.477140    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:28.477226    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:28.477254    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:28.477264    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:28.480386    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:28.977368    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:28.977407    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:28.977420    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:28.977427    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:28.980479    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:29.477521    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:29.477545    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:29.477556    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:29.477565    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:29.481179    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:29.481510    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:29.975906    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:29.975932    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:29.975943    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:29.975949    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:29.978633    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:30.477171    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:30.477189    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:30.477198    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:30.477201    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:30.480005    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:30.977947    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:30.977973    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:30.977993    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:30.977998    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:30.982219    3758 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 10:03:31.476252    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:31.476327    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:31.476341    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:31.476347    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:31.479417    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:31.975987    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:31.976012    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:31.976023    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:31.976029    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:31.979409    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:31.979880    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:32.477180    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:32.477207    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:32.477229    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:32.477240    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:32.480686    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:32.976225    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:32.976250    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:32.976261    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:32.976269    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:32.979533    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:33.475956    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:33.475980    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:33.475991    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:33.475997    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:33.479025    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:33.977111    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:33.977185    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:33.977198    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:33.977204    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:33.980426    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:33.980922    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:34.476455    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:34.476470    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:34.476477    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:34.476481    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:34.478517    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:34.976996    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:34.977037    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:34.977049    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:34.977084    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:34.980500    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:35.476045    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:35.476068    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:35.476079    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:35.476086    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:35.479058    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:35.975920    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:35.975944    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:35.975956    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:35.975962    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:35.979071    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:36.477845    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:36.477872    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:36.477885    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:36.477946    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:36.481401    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:36.481890    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:36.977033    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:36.977057    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:36.977069    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:36.977076    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:36.980476    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:37.477095    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:37.477120    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:37.477133    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:37.477141    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:37.480485    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:37.976303    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:37.976320    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:37.976343    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:37.976348    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:37.978551    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:38.476492    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:38.476517    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.476528    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.476536    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.480190    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:38.976091    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:38.976114    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.976124    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.976129    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.979741    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:38.980051    3758 node_ready.go:49] node "ha-286000-m02" has status "Ready":"True"
	I0816 10:03:38.980066    3758 node_ready.go:38] duration metric: took 16.004516347s for node "ha-286000-m02" to be "Ready" ...
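	The 16s Ready poll above issues raw GETs against /api/v1/nodes/ha-286000-m02 roughly every 500ms; the same wait could be expressed with kubectl (hypothetical invocation for illustration, not what the test actually runs):
	
	kubectl --kubeconfig=/var/lib/minikube/kubeconfig wait node/ha-286000-m02 --for=condition=Ready --timeout=6m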
	I0816 10:03:38.980077    3758 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 10:03:38.980127    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:03:38.980135    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.980142    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.980152    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.982937    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:38.987546    3758 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-2kqjf" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.987591    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-2kqjf
	I0816 10:03:38.987597    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.987603    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.987607    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.989339    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.989748    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:38.989758    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.989764    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.989768    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.991324    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.991660    3758 pod_ready.go:93] pod "coredns-6f6b679f8f-2kqjf" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:38.991671    3758 pod_ready.go:82] duration metric: took 4.111381ms for pod "coredns-6f6b679f8f-2kqjf" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.991678    3758 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-rfbz7" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.991708    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-rfbz7
	I0816 10:03:38.991713    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.991718    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.991721    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.993245    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.993624    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:38.993631    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.993640    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.993644    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.995095    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.995419    3758 pod_ready.go:93] pod "coredns-6f6b679f8f-rfbz7" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:38.995427    3758 pod_ready.go:82] duration metric: took 3.744646ms for pod "coredns-6f6b679f8f-rfbz7" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.995433    3758 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.995467    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-286000
	I0816 10:03:38.995472    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.995478    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.995482    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.997007    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.997390    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:38.997397    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.997403    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.997406    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.998839    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.999151    3758 pod_ready.go:93] pod "etcd-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:38.999160    3758 pod_ready.go:82] duration metric: took 3.721879ms for pod "etcd-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.999165    3758 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.999202    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-286000-m02
	I0816 10:03:38.999207    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.999213    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.999217    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.001189    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:39.001547    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:39.001554    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.001559    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.001563    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.003004    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:39.003290    3758 pod_ready.go:93] pod "etcd-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:39.003298    3758 pod_ready.go:82] duration metric: took 4.127582ms for pod "etcd-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.003307    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.177834    3758 request.go:632] Waited for 174.443582ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000
	I0816 10:03:39.177948    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000
	I0816 10:03:39.177963    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.177974    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.177981    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.181371    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:39.376438    3758 request.go:632] Waited for 194.538822ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:39.376562    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:39.376574    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.376586    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.376596    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.379989    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:39.380425    3758 pod_ready.go:93] pod "kube-apiserver-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:39.380437    3758 pod_ready.go:82] duration metric: took 377.131323ms for pod "kube-apiserver-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.380446    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.576633    3758 request.go:632] Waited for 196.095353ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000-m02
	I0816 10:03:39.576687    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000-m02
	I0816 10:03:39.576696    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.576769    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.576793    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.580164    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:39.776642    3758 request.go:632] Waited for 195.66937ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:39.776725    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:39.776735    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.776746    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.776752    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.780273    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:39.780714    3758 pod_ready.go:93] pod "kube-apiserver-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:39.780723    3758 pod_ready.go:82] duration metric: took 400.274102ms for pod "kube-apiserver-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.780730    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.977598    3758 request.go:632] Waited for 196.714211ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000
	I0816 10:03:39.977650    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000
	I0816 10:03:39.977659    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.977670    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.977679    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.981079    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.177460    3758 request.go:632] Waited for 195.94045ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:40.177528    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:40.177589    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:40.177603    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:40.177609    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:40.180971    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.181992    3758 pod_ready.go:93] pod "kube-controller-manager-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:40.182036    3758 pod_ready.go:82] duration metric: took 401.277819ms for pod "kube-controller-manager-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:40.182047    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:40.376258    3758 request.go:632] Waited for 194.111468ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000-m02
	I0816 10:03:40.376327    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000-m02
	I0816 10:03:40.376336    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:40.376347    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:40.376355    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:40.379476    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.576506    3758 request.go:632] Waited for 196.409077ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:40.576552    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:40.576560    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:40.576571    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:40.576580    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:40.579964    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.580641    3758 pod_ready.go:93] pod "kube-controller-manager-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:40.580654    3758 pod_ready.go:82] duration metric: took 398.607786ms for pod "kube-controller-manager-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:40.580663    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pt669" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:40.778194    3758 request.go:632] Waited for 197.469682ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pt669
	I0816 10:03:40.778324    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pt669
	I0816 10:03:40.778333    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:40.778345    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:40.778352    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:40.781925    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.976623    3758 request.go:632] Waited for 194.04689ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:40.976752    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:40.976761    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:40.976772    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:40.976780    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:40.980022    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.980473    3758 pod_ready.go:93] pod "kube-proxy-pt669" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:40.980485    3758 pod_ready.go:82] duration metric: took 399.822578ms for pod "kube-proxy-pt669" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:40.980494    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-w4nt2" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:41.177361    3758 request.go:632] Waited for 196.811255ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w4nt2
	I0816 10:03:41.177533    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w4nt2
	I0816 10:03:41.177545    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:41.177556    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:41.177563    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:41.181543    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:41.377692    3758 request.go:632] Waited for 195.583238ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:41.377791    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:41.377802    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:41.377812    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:41.377819    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:41.381134    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:41.381716    3758 pod_ready.go:93] pod "kube-proxy-w4nt2" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:41.381726    3758 pod_ready.go:82] duration metric: took 401.234529ms for pod "kube-proxy-w4nt2" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:41.381733    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:41.577020    3758 request.go:632] Waited for 195.246357ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000
	I0816 10:03:41.577073    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000
	I0816 10:03:41.577125    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:41.577138    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:41.577147    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:41.580051    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:41.777879    3758 request.go:632] Waited for 197.366145ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:41.777974    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:41.777983    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:41.777995    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:41.778003    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:41.781434    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:41.782012    3758 pod_ready.go:93] pod "kube-scheduler-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:41.782023    3758 pod_ready.go:82] duration metric: took 400.292624ms for pod "kube-scheduler-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:41.782051    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:41.976356    3758 request.go:632] Waited for 194.23341ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000-m02
	I0816 10:03:41.976463    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000-m02
	I0816 10:03:41.976473    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:41.976485    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:41.976494    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:41.980088    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:42.176952    3758 request.go:632] Waited for 196.161737ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:42.177009    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:42.177018    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.177026    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.177083    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.182888    3758 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 10:03:42.183179    3758 pod_ready.go:93] pod "kube-scheduler-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:42.183188    3758 pod_ready.go:82] duration metric: took 401.133714ms for pod "kube-scheduler-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:42.183203    3758 pod_ready.go:39] duration metric: took 3.203173842s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
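For reference, the pod_ready wait traced above reduces to a small client-go poll: fetch the pod, scan status.conditions for Ready, retry until the 6m0s timeout. A minimal Go sketch, not minikube's actual implementation (waitPodReady, the 500ms interval, and the assumption that cs is an already-configured clientset are all illustrative):

    package sketch

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls the named kube-system pod until it reports the
    // Ready condition, mirroring the pod_ready loop logged above. The
    // 6m0s timeout matches the log; the poll interval is arbitrary.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, name string) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // treat API errors as transient and keep polling
    			}
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }

The "Waited for ... due to client-side throttling" lines interleaved above come from client-go's built-in rate limiter, which spaces out the GETs this kind of loop issues.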
	I0816 10:03:42.183221    3758 api_server.go:52] waiting for apiserver process to appear ...
	I0816 10:03:42.183274    3758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:03:42.195671    3758 api_server.go:72] duration metric: took 19.566910589s to wait for apiserver process to appear ...
	I0816 10:03:42.195682    3758 api_server.go:88] waiting for apiserver healthz status ...
	I0816 10:03:42.195698    3758 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0816 10:03:42.198837    3758 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0816 10:03:42.198870    3758 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0816 10:03:42.198874    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.198880    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.198884    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.199389    3758 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0816 10:03:42.199468    3758 api_server.go:141] control plane version: v1.31.0
	I0816 10:03:42.199478    3758 api_server.go:131] duration metric: took 3.791893ms to wait for apiserver health ...
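The healthz probe just logged is a plain HTTPS GET that must return 200 with body "ok". A minimal sketch of the same check (checkHealthz is illustrative; minikube builds its HTTP client from the cluster CA certs, and the *http.Client here is assumed to be configured the same way):

    package sketch

    import (
    	"fmt"
    	"io"
    	"net/http"
    )

    // checkHealthz reproduces the GET https://<apiserver>/healthz probe:
    // anything other than a 200 "ok" counts as unhealthy.
    func checkHealthz(client *http.Client, host string) error {
    	resp, err := client.Get("https://" + host + "/healthz")
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
    		return fmt.Errorf("healthz returned %d: %q", resp.StatusCode, body)
    	}
    	return nil
    }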
	I0816 10:03:42.199486    3758 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 10:03:42.378048    3758 request.go:632] Waited for 178.500938ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:03:42.378156    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:03:42.378166    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.378178    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.378186    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.382776    3758 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 10:03:42.386068    3758 system_pods.go:59] 17 kube-system pods found
	I0816 10:03:42.386083    3758 system_pods.go:61] "coredns-6f6b679f8f-2kqjf" [be18cd09-3e3b-4749-bf29-7001a879f593] Running
	I0816 10:03:42.386087    3758 system_pods.go:61] "coredns-6f6b679f8f-rfbz7" [a593e27a-e38b-46c7-a603-44963c31c095] Running
	I0816 10:03:42.386090    3758 system_pods.go:61] "etcd-ha-286000" [c45bc623-befe-4e88-8da1-bb4f7d7be5b2] Running
	I0816 10:03:42.386093    3758 system_pods.go:61] "etcd-ha-286000-m02" [e14fbe0c-4527-4cbb-8c13-01c0b32a8d15] Running
	I0816 10:03:42.386096    3758 system_pods.go:61] "kindnet-t9kjf" [20c57f28-5e86-44a4-8a2a-07e1e240abf2] Running
	I0816 10:03:42.386099    3758 system_pods.go:61] "kindnet-whqxb" [1cc5291b-52ea-44b5-b87c-607d46b5281a] Running
	I0816 10:03:42.386102    3758 system_pods.go:61] "kube-apiserver-ha-286000" [c0d298a9-931b-4084-9e5f-01a88de27456] Running
	I0816 10:03:42.386104    3758 system_pods.go:61] "kube-apiserver-ha-286000-m02" [3666c131-2b88-483a-ab8c-b18cb8db7d6f] Running
	I0816 10:03:42.386120    3758 system_pods.go:61] "kube-controller-manager-ha-286000" [5a4f4c5d-d89a-4e5f-b022-479ca31f64cd] Running
	I0816 10:03:42.386127    3758 system_pods.go:61] "kube-controller-manager-ha-286000-m02" [a347c3d4-fa47-49ea-93a0-83087017a2bd] Running
	I0816 10:03:42.386130    3758 system_pods.go:61] "kube-proxy-pt669" [798c956f-8f08-465a-b0d9-90b65f703f80] Running
	I0816 10:03:42.386139    3758 system_pods.go:61] "kube-proxy-w4nt2" [79ce2248-f8fd-4a3b-aeb7-f81d7ad64564] Running
	I0816 10:03:42.386143    3758 system_pods.go:61] "kube-scheduler-ha-286000" [b4af80e0-cbb0-4257-a212-3585faff0875] Running
	I0816 10:03:42.386145    3758 system_pods.go:61] "kube-scheduler-ha-286000-m02" [89920e93-c959-47ec-8a2c-f2b63e8c3d3a] Running
	I0816 10:03:42.386153    3758 system_pods.go:61] "kube-vip-ha-286000" [b0f85be2-2afb-44b0-a701-161b24529349] Running
	I0816 10:03:42.386156    3758 system_pods.go:61] "kube-vip-ha-286000-m02" [4982dc53-946d-4f75-8e9e-452ef39ff2fa] Running
	I0816 10:03:42.386159    3758 system_pods.go:61] "storage-provisioner" [4805d53b-2db3-4092-a3f2-d4a854e93adc] Running
	I0816 10:03:42.386163    3758 system_pods.go:74] duration metric: took 186.675963ms to wait for pod list to return data ...
	I0816 10:03:42.386170    3758 default_sa.go:34] waiting for default service account to be created ...
	I0816 10:03:42.577030    3758 request.go:632] Waited for 190.795237ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0816 10:03:42.577183    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0816 10:03:42.577198    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.577209    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.577215    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.581020    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:42.581204    3758 default_sa.go:45] found service account: "default"
	I0816 10:03:42.581216    3758 default_sa.go:55] duration metric: took 195.045213ms for default service account to be created ...
	I0816 10:03:42.581224    3758 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 10:03:42.777160    3758 request.go:632] Waited for 195.848894ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:03:42.777252    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:03:42.777268    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.777285    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.777303    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.781637    3758 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 10:03:42.784962    3758 system_pods.go:86] 17 kube-system pods found
	I0816 10:03:42.784974    3758 system_pods.go:89] "coredns-6f6b679f8f-2kqjf" [be18cd09-3e3b-4749-bf29-7001a879f593] Running
	I0816 10:03:42.784978    3758 system_pods.go:89] "coredns-6f6b679f8f-rfbz7" [a593e27a-e38b-46c7-a603-44963c31c095] Running
	I0816 10:03:42.784981    3758 system_pods.go:89] "etcd-ha-286000" [c45bc623-befe-4e88-8da1-bb4f7d7be5b2] Running
	I0816 10:03:42.784984    3758 system_pods.go:89] "etcd-ha-286000-m02" [e14fbe0c-4527-4cbb-8c13-01c0b32a8d15] Running
	I0816 10:03:42.784987    3758 system_pods.go:89] "kindnet-t9kjf" [20c57f28-5e86-44a4-8a2a-07e1e240abf2] Running
	I0816 10:03:42.784990    3758 system_pods.go:89] "kindnet-whqxb" [1cc5291b-52ea-44b5-b87c-607d46b5281a] Running
	I0816 10:03:42.784992    3758 system_pods.go:89] "kube-apiserver-ha-286000" [c0d298a9-931b-4084-9e5f-01a88de27456] Running
	I0816 10:03:42.784995    3758 system_pods.go:89] "kube-apiserver-ha-286000-m02" [3666c131-2b88-483a-ab8c-b18cb8db7d6f] Running
	I0816 10:03:42.784997    3758 system_pods.go:89] "kube-controller-manager-ha-286000" [5a4f4c5d-d89a-4e5f-b022-479ca31f64cd] Running
	I0816 10:03:42.785001    3758 system_pods.go:89] "kube-controller-manager-ha-286000-m02" [a347c3d4-fa47-49ea-93a0-83087017a2bd] Running
	I0816 10:03:42.785003    3758 system_pods.go:89] "kube-proxy-pt669" [798c956f-8f08-465a-b0d9-90b65f703f80] Running
	I0816 10:03:42.785006    3758 system_pods.go:89] "kube-proxy-w4nt2" [79ce2248-f8fd-4a3b-aeb7-f81d7ad64564] Running
	I0816 10:03:42.785009    3758 system_pods.go:89] "kube-scheduler-ha-286000" [b4af80e0-cbb0-4257-a212-3585faff0875] Running
	I0816 10:03:42.785011    3758 system_pods.go:89] "kube-scheduler-ha-286000-m02" [89920e93-c959-47ec-8a2c-f2b63e8c3d3a] Running
	I0816 10:03:42.785015    3758 system_pods.go:89] "kube-vip-ha-286000" [b0f85be2-2afb-44b0-a701-161b24529349] Running
	I0816 10:03:42.785017    3758 system_pods.go:89] "kube-vip-ha-286000-m02" [4982dc53-946d-4f75-8e9e-452ef39ff2fa] Running
	I0816 10:03:42.785023    3758 system_pods.go:89] "storage-provisioner" [4805d53b-2db3-4092-a3f2-d4a854e93adc] Running
	I0816 10:03:42.785028    3758 system_pods.go:126] duration metric: took 203.80313ms to wait for k8s-apps to be running ...
	I0816 10:03:42.785034    3758 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 10:03:42.785088    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:03:42.796381    3758 system_svc.go:56] duration metric: took 11.343439ms WaitForService to wait for kubelet
	I0816 10:03:42.796397    3758 kubeadm.go:582] duration metric: took 20.16764951s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
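The kubelet check above relies on systemctl's exit code: is-active exits 0 iff the unit is active. A one-function sketch of that logic (the log runs the command over SSH inside the VM; for brevity this hypothetical version runs it on the local host):

    package sketch

    import "os/exec"

    // kubeletActive mirrors `sudo systemctl is-active --quiet kubelet`:
    // a nil error means exit status 0, i.e. the unit is active.
    func kubeletActive() bool {
    	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }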
	I0816 10:03:42.796409    3758 node_conditions.go:102] verifying NodePressure condition ...
	I0816 10:03:42.977594    3758 request.go:632] Waited for 181.143005ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0816 10:03:42.977695    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0816 10:03:42.977706    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.977719    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.977724    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.981373    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:42.981988    3758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 10:03:42.982013    3758 node_conditions.go:123] node cpu capacity is 2
	I0816 10:03:42.982039    3758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 10:03:42.982043    3758 node_conditions.go:123] node cpu capacity is 2
	I0816 10:03:42.982046    3758 node_conditions.go:105] duration metric: took 185.637747ms to run NodePressure ...
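The NodePressure step reads the per-node capacity fields that appear in the two log pairs above (ephemeral-storage and cpu for each of the two nodes). A sketch of the equivalent read via client-go (printNodeCapacity is illustrative; cs is assumed to be a configured clientset):

    package sketch

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // printNodeCapacity lists every node and prints the same capacity
    // fields the node_conditions lines above report.
    func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
    	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, n := range nodes.Items {
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
    	}
    	return nil
    }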
	I0816 10:03:42.982055    3758 start.go:241] waiting for startup goroutines ...
	I0816 10:03:42.982079    3758 start.go:255] writing updated cluster config ...
	I0816 10:03:43.003740    3758 out.go:201] 
	I0816 10:03:43.024885    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:03:43.024976    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:03:43.047776    3758 out.go:177] * Starting "ha-286000-m03" control-plane node in "ha-286000" cluster
	I0816 10:03:43.137621    3758 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:03:43.137655    3758 cache.go:56] Caching tarball of preloaded images
	I0816 10:03:43.137861    3758 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 10:03:43.137884    3758 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:03:43.138022    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:03:43.159475    3758 start.go:360] acquireMachinesLock for ha-286000-m03: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:03:43.159592    3758 start.go:364] duration metric: took 91.442µs to acquireMachinesLock for "ha-286000-m03"
	I0816 10:03:43.159625    3758 start.go:93] Provisioning new machine with config: &{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:03:43.159736    3758 start.go:125] createHost starting for "m03" (driver="hyperkit")
	I0816 10:03:43.180569    3758 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 10:03:43.180719    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:03:43.180762    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:03:43.191087    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51098
	I0816 10:03:43.191558    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:03:43.191889    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:03:43.191899    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:03:43.192106    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:03:43.192302    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:03:43.192394    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:43.192506    3758 start.go:159] libmachine.API.Create for "ha-286000" (driver="hyperkit")
	I0816 10:03:43.192526    3758 client.go:168] LocalClient.Create starting
	I0816 10:03:43.192559    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem
	I0816 10:03:43.192606    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:03:43.192620    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:03:43.192675    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem
	I0816 10:03:43.192704    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:03:43.192714    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:03:43.192726    3758 main.go:141] libmachine: Running pre-create checks...
	I0816 10:03:43.192731    3758 main.go:141] libmachine: (ha-286000-m03) Calling .PreCreateCheck
	I0816 10:03:43.192813    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:43.192833    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetConfigRaw
	I0816 10:03:43.202038    3758 main.go:141] libmachine: Creating machine...
	I0816 10:03:43.202062    3758 main.go:141] libmachine: (ha-286000-m03) Calling .Create
	I0816 10:03:43.202302    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:43.202574    3758 main.go:141] libmachine: (ha-286000-m03) DBG | I0816 10:03:43.202284    3848 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:03:43.202705    3758 main.go:141] libmachine: (ha-286000-m03) Downloading /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1276/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 10:03:43.529965    3758 main.go:141] libmachine: (ha-286000-m03) DBG | I0816 10:03:43.529902    3848 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa...
	I0816 10:03:43.631057    3758 main.go:141] libmachine: (ha-286000-m03) DBG | I0816 10:03:43.630965    3848 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/ha-286000-m03.rawdisk...
	I0816 10:03:43.631074    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Writing magic tar header
	I0816 10:03:43.631083    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Writing SSH key tar header
	I0816 10:03:43.631711    3758 main.go:141] libmachine: (ha-286000-m03) DBG | I0816 10:03:43.631681    3848 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03 ...
	I0816 10:03:44.155270    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:44.155286    3758 main.go:141] libmachine: (ha-286000-m03) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/hyperkit.pid
	I0816 10:03:44.155318    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Using UUID 408efb18-ff91-428f-b414-6eb0d982a779
	I0816 10:03:44.181981    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Generated MAC 8a:e:de:5b:b5:8b
	I0816 10:03:44.182010    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000
	I0816 10:03:44.182059    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"408efb18-ff91-428f-b414-6eb0d982a779", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:03:44.182101    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"408efb18-ff91-428f-b414-6eb0d982a779", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:03:44.182228    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "408efb18-ff91-428f-b414-6eb0d982a779", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/ha-286000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"}
	I0816 10:03:44.182306    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 408efb18-ff91-428f-b414-6eb0d982a779 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/ha-286000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"
	I0816 10:03:44.182332    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 10:03:44.185479    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: Pid is 3849
	I0816 10:03:44.185900    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 0
	I0816 10:03:44.185912    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:44.186025    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:44.186994    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:44.187062    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:03:44.187079    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:03:44.187114    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:03:44.187142    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:03:44.187168    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:03:44.187185    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:03:44.193352    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 10:03:44.201781    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 10:03:44.202595    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:03:44.202610    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:03:44.202637    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:03:44.202653    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:03:44.588441    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 10:03:44.588458    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 10:03:44.703100    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:03:44.703117    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:03:44.703125    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:03:44.703135    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:03:44.703952    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 10:03:44.703963    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 10:03:46.188345    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 1
	I0816 10:03:46.188360    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:46.188480    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:46.189255    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:46.189306    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:03:46.189321    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:03:46.189335    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:03:46.189352    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:03:46.189359    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:03:46.189385    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:03:48.190823    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 2
	I0816 10:03:48.190838    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:48.190916    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:48.191692    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:48.191747    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:03:48.191757    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:03:48.191779    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:03:48.191787    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:03:48.191803    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:03:48.191815    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:03:50.191897    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 3
	I0816 10:03:50.191913    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:50.191977    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:50.192744    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:50.192793    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:03:50.192802    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:03:50.192811    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:03:50.192820    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:03:50.192836    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:03:50.192850    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:03:50.310705    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:50 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 10:03:50.310770    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:50 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 10:03:50.310783    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:50 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 10:03:50.334054    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:50 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 10:03:52.193443    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 4
	I0816 10:03:52.193459    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:52.193535    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:52.194319    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:52.194367    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:03:52.194379    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:03:52.194390    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:03:52.194399    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:03:52.194408    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:03:52.194419    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:03:54.195434    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 5
	I0816 10:03:54.195456    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:54.195540    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:54.196406    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:54.196510    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 6 entries in /var/db/dhcpd_leases!
	I0816 10:03:54.196530    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66c0d7f9}
	I0816 10:03:54.196551    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found match: 8a:e:de:5b:b5:8b
	I0816 10:03:54.196553    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetConfigRaw
	I0816 10:03:54.196565    3758 main.go:141] libmachine: (ha-286000-m03) DBG | IP: 192.169.0.7
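The "Searching for <mac> in /var/db/dhcpd_leases" loop that just resolved 192.169.0.7 is a line scan of the macOS DHCP lease file. A sketch of that lookup (findLeaseIP is illustrative, and it assumes the lease layout implied by the entries above, where each record lists ip_address= before hw_address=1,<mac>):

    package sketch

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // findLeaseIP scans /var/db/dhcpd_leases for a MAC address and returns
    // the IP from the same lease entry, the way the hyperkit driver's
    // polling attempts above do.
    func findLeaseIP(mac string) (string, error) {
    	f, err := os.Open("/var/db/dhcpd_leases")
    	if err != nil {
    		return "", err
    	}
    	defer f.Close()
    	var ip string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		switch {
    		case strings.HasPrefix(line, "ip_address="):
    			ip = strings.TrimPrefix(line, "ip_address=") // remember the entry's IP
    		case strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, ","+mac):
    			return ip, nil // MAC matched: report the IP seen in this entry
    		}
    	}
    	return "", fmt.Errorf("no DHCP lease found for %s", mac)
    }

The lease file only gains an entry once the guest's DHCP client has run, which is why the driver retries every two seconds until the MAC appears (Attempt 5 above).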
	I0816 10:03:54.197236    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:54.197351    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:54.197441    3758 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 10:03:54.197454    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetState
	I0816 10:03:54.197541    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:54.197611    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:54.198439    3758 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 10:03:54.198446    3758 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 10:03:54.198450    3758 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 10:03:54.198454    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:54.198532    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:54.198612    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:54.198695    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:54.198783    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:54.198892    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:54.199063    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:54.199071    3758 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 10:03:55.266165    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
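The WaitForSSH probe that just succeeded simply dials the guest and runs `exit 0`; a nil error means the SSH stack inside the VM is up. A sketch using golang.org/x/crypto/ssh (sshReachable and the "docker" user are illustrative; it assumes key-based auth with the generated machine key, and skips host-key checking, which is tolerable only for throwaway test VMs like these):

    package sketch

    import (
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    // sshReachable dials addr, runs `exit 0`, and reports success iff the
    // command does, mirroring the probe logged above.
    func sshReachable(addr string, signer ssh.Signer) bool {
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM only
    		Timeout:         10 * time.Second,
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return false
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return false
    	}
    	defer sess.Close()
    	return sess.Run("exit 0") == nil
    }

The same transport then carries the provisioning commands that follow (cat /etc/os-release, sudo hostname, the /etc/hosts edit).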
	I0816 10:03:55.266179    3758 main.go:141] libmachine: Detecting the provisioner...
	I0816 10:03:55.266185    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.266318    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.266413    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.266507    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.266601    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.266731    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.266879    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.266886    3758 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 10:03:55.332778    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 10:03:55.332822    3758 main.go:141] libmachine: found compatible host: buildroot
	I0816 10:03:55.332828    3758 main.go:141] libmachine: Provisioning with buildroot...
	I0816 10:03:55.332834    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:03:55.332971    3758 buildroot.go:166] provisioning hostname "ha-286000-m03"
	I0816 10:03:55.332982    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:03:55.333079    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.333186    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.333277    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.333375    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.333463    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.333593    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.333737    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.333746    3758 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-286000-m03 && echo "ha-286000-m03" | sudo tee /etc/hostname
	I0816 10:03:55.411701    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-286000-m03
	
	I0816 10:03:55.411715    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.411851    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.411965    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.412057    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.412140    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.412274    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.412418    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.412429    3758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-286000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-286000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-286000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 10:03:55.485102    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 10:03:55.485118    3758 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19461-1276/.minikube CaCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19461-1276/.minikube}
	I0816 10:03:55.485127    3758 buildroot.go:174] setting up certificates
	I0816 10:03:55.485134    3758 provision.go:84] configureAuth start
	I0816 10:03:55.485141    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:03:55.485284    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:03:55.485375    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.485445    3758 provision.go:143] copyHostCerts
	I0816 10:03:55.485474    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:03:55.485527    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem, removing ...
	I0816 10:03:55.485533    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:03:55.485654    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem (1123 bytes)
	I0816 10:03:55.485864    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:03:55.485895    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem, removing ...
	I0816 10:03:55.485900    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:03:55.486003    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem (1679 bytes)
	I0816 10:03:55.486172    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:03:55.486206    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem, removing ...
	I0816 10:03:55.486215    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:03:55.486286    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem (1078 bytes)
	I0816 10:03:55.486435    3758 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem org=jenkins.ha-286000-m03 san=[127.0.0.1 192.169.0.7 ha-286000-m03 localhost minikube]
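The `generating server cert` step above mints a host certificate signed by the local minikube CA, with the listed IPs and hostnames as SANs. A compressed sketch of that flow with Go's crypto/x509, assuming an RSA PKCS#1 CA key in the ca.pem/ca-key.pem files named in the log; this illustrates the SAN-bearing template, not minikube's actual code:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    // mustDecode reads a PEM file and returns the DER bytes of its first block.
    func mustDecode(path string) []byte {
    	raw, err := os.ReadFile(path)
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(raw)
    	return block.Bytes
    }

    func main() {
    	caCert, err := x509.ParseCertificate(mustDecode("ca.pem"))
    	if err != nil {
    		log.Fatal(err)
    	}
    	caKey, err := x509.ParsePKCS1PrivateKey(mustDecode("ca-key.pem")) // assumes RSA PKCS#1
    	if err != nil {
    		log.Fatal(err)
    	}
    	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-286000-m03"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(10, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// the SANs listed in the log line: IPs plus hostnames
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.7")},
    		DNSNames:    []string{"ha-286000-m03", "localhost", "minikube"},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }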
	I0816 10:03:55.572803    3758 provision.go:177] copyRemoteCerts
	I0816 10:03:55.572855    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 10:03:55.572869    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.573015    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.573114    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.573208    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.573304    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:03:55.612104    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 10:03:55.612186    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 10:03:55.632484    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 10:03:55.632548    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 10:03:55.652677    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 10:03:55.652752    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 10:03:55.672555    3758 provision.go:87] duration metric: took 187.4165ms to configureAuth
	I0816 10:03:55.672568    3758 buildroot.go:189] setting minikube options for container-runtime
	I0816 10:03:55.672735    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:03:55.672748    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:55.672889    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.672982    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.673071    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.673157    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.673245    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.673371    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.673496    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.673504    3758 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 10:03:55.738053    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 10:03:55.738064    3758 buildroot.go:70] root file system type: tmpfs
	I0816 10:03:55.738156    3758 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 10:03:55.738167    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.738306    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.738397    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.738489    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.738573    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.738694    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.738841    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.738890    3758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 10:03:55.813774    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 10:03:55.813790    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.813924    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.814019    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.814103    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.814193    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.814320    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.814470    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.814484    3758 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 10:03:57.356529    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
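The one-liner above is a small idempotency protocol: render the unit to docker.service.new, diff it against the live unit, and only on a difference (or, as here, when the file does not exist yet) swap it in and daemon-reload/enable/restart. A local Go sketch of the same compare-then-swap decision, with placeholder paths; the remote version additionally shells out to systemctl when a change lands:

    package main

    import (
    	"bytes"
    	"fmt"
    	"log"
    	"os"
    )

    // syncFile writes desired to path only when the on-disk content differs,
    // and reports whether a reload/restart would be needed -- the same
    // decision the diff-or-mv one-liner makes on the VM.
    func syncFile(path string, desired []byte) (changed bool, err error) {
    	current, err := os.ReadFile(path)
    	if err == nil && bytes.Equal(current, desired) {
    		return false, nil // identical: nothing to do, no restart
    	}
    	if err != nil && !os.IsNotExist(err) {
    		return false, err
    	}
    	// differs, or (as in the log) does not exist yet: replace it
    	if err := os.WriteFile(path, desired, 0o644); err != nil {
    		return false, err
    	}
    	return true, nil
    }

    func main() {
    	unit := []byte("[Unit]\nDescription=example\n")
    	changed, err := syncFile("/tmp/docker.service", unit) // placeholder path
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("restart needed:", changed) // true on first run, false after
    }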
	I0816 10:03:57.356544    3758 main.go:141] libmachine: Checking connection to Docker...
	I0816 10:03:57.356549    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetURL
	I0816 10:03:57.356692    3758 main.go:141] libmachine: Docker is up and running!
	I0816 10:03:57.356700    3758 main.go:141] libmachine: Reticulating splines...
	I0816 10:03:57.356705    3758 client.go:171] duration metric: took 14.164443579s to LocalClient.Create
	I0816 10:03:57.356717    3758 start.go:167] duration metric: took 14.164482312s to libmachine.API.Create "ha-286000"
	I0816 10:03:57.356727    3758 start.go:293] postStartSetup for "ha-286000-m03" (driver="hyperkit")
	I0816 10:03:57.356734    3758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 10:03:57.356744    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:57.356919    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 10:03:57.356935    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:57.357030    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:57.357130    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:57.357273    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:57.357390    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:03:57.398231    3758 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 10:03:57.402472    3758 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 10:03:57.402483    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/addons for local assets ...
	I0816 10:03:57.402571    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/files for local assets ...
	I0816 10:03:57.402723    3758 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> 18312.pem in /etc/ssl/certs
	I0816 10:03:57.402729    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /etc/ssl/certs/18312.pem
	I0816 10:03:57.402895    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 10:03:57.412226    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:03:57.443242    3758 start.go:296] duration metric: took 86.507779ms for postStartSetup
	I0816 10:03:57.443268    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetConfigRaw
	I0816 10:03:57.443871    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:03:57.444031    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:03:57.444386    3758 start.go:128] duration metric: took 14.284913269s to createHost
	I0816 10:03:57.444401    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:57.444490    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:57.444568    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:57.444658    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:57.444732    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:57.444842    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:57.444963    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:57.444971    3758 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 10:03:57.509634    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723827837.586061636
	
	I0816 10:03:57.509648    3758 fix.go:216] guest clock: 1723827837.586061636
	I0816 10:03:57.509653    3758 fix.go:229] Guest: 2024-08-16 10:03:57.586061636 -0700 PDT Remote: 2024-08-16 10:03:57.444395 -0700 PDT m=+129.370490857 (delta=141.666636ms)
	I0816 10:03:57.509665    3758 fix.go:200] guest clock delta is within tolerance: 141.666636ms
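The clock check above runs `date +%s.%N` on the guest and compares the result against the host's wall clock, accepting the node when the skew (here 141ms) is inside a tolerance. A sketch of that comparison, using the guest timestamp string from this log and a hypothetical one-second threshold for illustration:

    package main

    import (
    	"fmt"
    	"log"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock turns `date +%s.%N` output (e.g. "1723827837.586061636")
    // into a time.Time.
    func parseGuestClock(s string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1723827837.586061636") // value from the log
    	if err != nil {
    		log.Fatal(err)
    	}
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = time.Second // hypothetical threshold for illustration
    	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta < tolerance)
    }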
	I0816 10:03:57.509669    3758 start.go:83] releasing machines lock for "ha-286000-m03", held for 14.350339851s
	I0816 10:03:57.509691    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:57.509832    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:03:57.542277    3758 out.go:177] * Found network options:
	I0816 10:03:57.562266    3758 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0816 10:03:57.583040    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	W0816 10:03:57.583073    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:03:57.583092    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:57.583964    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:57.584310    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:57.584469    3758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 10:03:57.584522    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	W0816 10:03:57.584570    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	W0816 10:03:57.584596    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:03:57.584708    3758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0816 10:03:57.584735    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:57.584813    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:57.584995    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:57.585023    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:57.585164    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:57.585195    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:57.585338    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:03:57.585406    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:57.585566    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	W0816 10:03:57.623481    3758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 10:03:57.623541    3758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 10:03:57.670278    3758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 10:03:57.670298    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:03:57.670408    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:03:57.686134    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 10:03:57.694681    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 10:03:57.703134    3758 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 10:03:57.703198    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 10:03:57.711736    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:03:57.720041    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 10:03:57.728421    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:03:57.737252    3758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 10:03:57.746356    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 10:03:57.755117    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 10:03:57.763962    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 10:03:57.772639    3758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 10:03:57.780684    3758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 10:03:57.788938    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:03:57.893932    3758 ssh_runner.go:195] Run: sudo systemctl restart containerd
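The run of `sed -i -r` commands above rewrites /etc/containerd/config.toml so containerd uses the cgroupfs driver (SystemdCgroup = false) along with the expected pause image, CNI conf_dir, and runtime settings. As an illustration, the SystemdCgroup edit can be expressed in Go with the same anchored pattern the log's sed uses; the input is an inline sample here, not the real file:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	configTOML := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true
    `
    	// equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	out := re.ReplaceAllString(configTOML, "${1}SystemdCgroup = false")
    	fmt.Print(out)
    }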
	I0816 10:03:57.913150    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:03:57.913224    3758 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 10:03:57.932274    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:03:57.945000    3758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 10:03:57.965222    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:03:57.977460    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:03:57.988259    3758 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 10:03:58.006552    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:03:58.017261    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:03:58.032545    3758 ssh_runner.go:195] Run: which cri-dockerd
	I0816 10:03:58.035464    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 10:03:58.042742    3758 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 10:03:58.056747    3758 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 10:03:58.163471    3758 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 10:03:58.281020    3758 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 10:03:58.281044    3758 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 10:03:58.294992    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:03:58.399122    3758 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:04:59.415916    3758 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.017935847s)
	I0816 10:04:59.415981    3758 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0816 10:04:59.453653    3758 out.go:201] 
	W0816 10:04:59.474040    3758 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 16 17:03:56 ha-286000-m03 systemd[1]: Starting Docker Application Container Engine...
	Aug 16 17:03:56 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:56.200976866Z" level=info msg="Starting up"
	Aug 16 17:03:56 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:56.201455258Z" level=info msg="containerd not running, starting managed containerd"
	Aug 16 17:03:56 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:56.201908035Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=519
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.218394236Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233514110Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233584256Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233654513Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233690141Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233768300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233806284Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233955562Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233996222Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.234026670Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.234054929Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.234137090Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.234382642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.235934793Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.235985918Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.236117022Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.236159609Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.236256962Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.236327154Z" level=info msg="metadata content store policy set" policy=shared
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238422470Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238509436Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238555940Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238594343Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238633954Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238734892Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238944917Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239083528Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239123172Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239153894Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239184820Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239214617Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239248728Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239285992Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239333696Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239368624Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239411950Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239451554Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239490333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239523121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239554681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239589296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239621390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239653930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239683266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239712579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239742056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239772828Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239801901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239830636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239859570Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239893189Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239929555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239960582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239991787Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240076099Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240121809Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240153088Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240182438Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240210918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240240067Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240268586Z" level=info msg="NRI interface is disabled by configuration."
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240520719Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240609661Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240710485Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240795617Z" level=info msg="containerd successfully booted in 0.023088s"
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.221812848Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.228854621Z" level=info msg="Loading containers: start."
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.312023793Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.390459780Z" level=info msg="Loading containers: done."
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.404746652Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.404864935Z" level=info msg="Daemon has completed initialization"
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.430646618Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.430810646Z" level=info msg="API listen on [::]:2376"
	Aug 16 17:03:57 ha-286000-m03 systemd[1]: Started Docker Application Container Engine.
	Aug 16 17:03:58 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:58.488220464Z" level=info msg="Processing signal 'terminated'"
	Aug 16 17:03:58 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:58.489144852Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 16 17:03:58 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:58.489473398Z" level=info msg="Daemon shutdown complete"
	Aug 16 17:03:58 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:58.489515121Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 16 17:03:58 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:58.489527809Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 16 17:03:58 ha-286000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Aug 16 17:03:59 ha-286000-m03 systemd[1]: docker.service: Deactivated successfully.
	Aug 16 17:03:59 ha-286000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Aug 16 17:03:59 ha-286000-m03 systemd[1]: Starting Docker Application Container Engine...
	Aug 16 17:03:59 ha-286000-m03 dockerd[913]: time="2024-08-16T17:03:59.520198872Z" level=info msg="Starting up"
	Aug 16 17:04:59 ha-286000-m03 dockerd[913]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 16 17:04:59 ha-286000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 16 17:04:59 ha-286000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 16 17:04:59 ha-286000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
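The root cause visible in this journal is that the restarted dockerd (pid 913) waited on /run/containerd/containerd.sock and hit its dial deadline, so docker.service failed. Whether that socket is accepting connections can be checked with a plain unix-socket dial; the path and the idea of a short timeout are taken from the error message above, and this probe is only a diagnostic illustration, not part of minikube:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// dockerd failed with: failed to dial "/run/containerd/containerd.sock":
    	// context deadline exceeded. Reproduce the dial with a short timeout.
    	conn, err := net.DialTimeout("unix", "/run/containerd/containerd.sock", 5*time.Second)
    	if err != nil {
    		fmt.Println("containerd socket not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("containerd socket is accepting connections")
    }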
	W0816 10:04:59.474111    3758 out.go:270] * 
	W0816 10:04:59.474938    3758 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:04:59.558192    3758 out.go:201] 
	
	
	==> Docker <==
	Aug 16 17:18:06 ha-286000 dockerd[1241]: time="2024-08-16T17:18:06.825488106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:18:07 ha-286000 dockerd[1235]: time="2024-08-16T17:18:07.380914614Z" level=info msg="ignoring event" container=f55b59f53c6eb976c8fd19fc0412bef109f9fc2505622d0a8ec85ff7a5968741 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 16 17:18:07 ha-286000 dockerd[1241]: time="2024-08-16T17:18:07.381086772Z" level=info msg="shim disconnected" id=f55b59f53c6eb976c8fd19fc0412bef109f9fc2505622d0a8ec85ff7a5968741 namespace=moby
	Aug 16 17:18:07 ha-286000 dockerd[1241]: time="2024-08-16T17:18:07.381165916Z" level=warning msg="cleaning up after shim disconnected" id=f55b59f53c6eb976c8fd19fc0412bef109f9fc2505622d0a8ec85ff7a5968741 namespace=moby
	Aug 16 17:18:07 ha-286000 dockerd[1241]: time="2024-08-16T17:18:07.381174899Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 16 17:18:07 ha-286000 dockerd[1241]: time="2024-08-16T17:18:07.835148340Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:18:07 ha-286000 dockerd[1241]: time="2024-08-16T17:18:07.835268034Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:18:07 ha-286000 dockerd[1241]: time="2024-08-16T17:18:07.835304228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:18:07 ha-286000 dockerd[1241]: time="2024-08-16T17:18:07.835440471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:18:38 ha-286000 dockerd[1235]: time="2024-08-16T17:18:38.182671263Z" level=info msg="ignoring event" container=078fa65ce0cbbaee7bfe7a37ccce2f4babca06b52980d2e1de0c1e09137a15d3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 16 17:18:38 ha-286000 dockerd[1241]: time="2024-08-16T17:18:38.183075145Z" level=info msg="shim disconnected" id=078fa65ce0cbbaee7bfe7a37ccce2f4babca06b52980d2e1de0c1e09137a15d3 namespace=moby
	Aug 16 17:18:38 ha-286000 dockerd[1241]: time="2024-08-16T17:18:38.183173282Z" level=warning msg="cleaning up after shim disconnected" id=078fa65ce0cbbaee7bfe7a37ccce2f4babca06b52980d2e1de0c1e09137a15d3 namespace=moby
	Aug 16 17:18:38 ha-286000 dockerd[1241]: time="2024-08-16T17:18:38.183184275Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 16 17:18:51 ha-286000 dockerd[1241]: time="2024-08-16T17:18:51.672885883Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:18:51 ha-286000 dockerd[1241]: time="2024-08-16T17:18:51.672940010Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:18:51 ha-286000 dockerd[1241]: time="2024-08-16T17:18:51.672951795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:18:51 ha-286000 dockerd[1241]: time="2024-08-16T17:18:51.673002593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:19:33 ha-286000 dockerd[1241]: time="2024-08-16T17:19:33.103871745Z" level=info msg="shim disconnected" id=5f3adc202a7a29f6d14226262a9af4c9c240c90bceb008825d57f9a314211b16 namespace=moby
	Aug 16 17:19:33 ha-286000 dockerd[1241]: time="2024-08-16T17:19:33.103921329Z" level=warning msg="cleaning up after shim disconnected" id=5f3adc202a7a29f6d14226262a9af4c9c240c90bceb008825d57f9a314211b16 namespace=moby
	Aug 16 17:19:33 ha-286000 dockerd[1241]: time="2024-08-16T17:19:33.103930473Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 16 17:19:33 ha-286000 dockerd[1235]: time="2024-08-16T17:19:33.104077034Z" level=info msg="ignoring event" container=5f3adc202a7a29f6d14226262a9af4c9c240c90bceb008825d57f9a314211b16 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 16 17:19:33 ha-286000 dockerd[1241]: time="2024-08-16T17:19:33.188246206Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:19:33 ha-286000 dockerd[1241]: time="2024-08-16T17:19:33.188343557Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:19:33 ha-286000 dockerd[1241]: time="2024-08-16T17:19:33.188355548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:19:33 ha-286000 dockerd[1241]: time="2024-08-16T17:19:33.188638980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	8d7a6d0f95379       604f5db92eaa8                                                                                         11 seconds ago       Running             kube-apiserver            1                   818ee6dafe6c9       kube-apiserver-ha-286000
	0529825d87ca5       6e38f40d628db                                                                                         53 seconds ago       Running             storage-provisioner       2                   482990a4b00e6       storage-provisioner
	078fa65ce0cbb       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   482990a4b00e6       storage-provisioner
	a5da1871a366d       38af8ddebf499                                                                                         About a minute ago   Running             kube-vip                  1                   69fba128b04a6       kube-vip-ha-286000
	5bbe7a8cc492e       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   14 minutes ago       Running             busybox                   0                   1873ade92edb9       busybox-7dff88458-dvmvk
	bcd7170b050a5       cbb01a7bd410d                                                                                         16 minutes ago       Running             coredns                   0                   452942e267927       coredns-6f6b679f8f-rfbz7
	60d3d03e297ce       cbb01a7bd410d                                                                                         16 minutes ago       Running             coredns                   0                   fbd84fb813c90       coredns-6f6b679f8f-2kqjf
	2677394dcd66f       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              17 minutes ago       Running             kindnet-cni               0                   afaf657e85c32       kindnet-whqxb
	81f6c96d46494       ad83b2ca7b09e                                                                                         17 minutes ago       Running             kube-proxy                0                   dfb822fc27b5e       kube-proxy-w4nt2
	cafa34c562392       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     17 minutes ago       Exited              kube-vip                  0                   69fba128b04a6       kube-vip-ha-286000
	8f5867ee99d9b       045733566833c                                                                                         17 minutes ago       Running             kube-controller-manager   0                   e87fd3e77384f       kube-controller-manager-ha-286000
	f7b2e9efdd94f       1766f54c897f0                                                                                         17 minutes ago       Running             kube-scheduler            0                   8e2b186980aff       kube-scheduler-ha-286000
	d8dadff6cec78       2e96e5913fc06                                                                                         17 minutes ago       Running             etcd                      0                   cdb14ff7d8896       etcd-ha-286000
	5f3adc202a7a2       604f5db92eaa8                                                                                         17 minutes ago       Exited              kube-apiserver            0                   818ee6dafe6c9       kube-apiserver-ha-286000
	
	
	==> coredns [60d3d03e297c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:37081 - 62351 "HINFO IN 7437422972060865489.3041931607585121070. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009759141s
	[INFO] 10.244.0.4:34542 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.00094119s
	[INFO] 10.244.0.4:38912 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.000921826s
	[INFO] 10.244.1.2:39585 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000070042s
	[INFO] 10.244.0.4:39673 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000101817s
	[INFO] 10.244.0.4:55820 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000225412s
	[INFO] 10.244.1.2:48427 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135793s
	[INFO] 10.244.1.2:33204 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000791531s
	[INFO] 10.244.1.2:51238 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081236s
	[INFO] 10.244.1.2:42705 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000135743s
	[INFO] 10.244.1.2:33254 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106295s
	[INFO] 10.244.0.4:53900 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067988s
	[INFO] 10.244.1.2:48994 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00006771s
	[INFO] 10.244.1.2:56734 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000138883s
	[INFO] 10.244.0.4:53039 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000056881s
	[INFO] 10.244.0.4:47474 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000052458s
	[INFO] 10.244.1.2:35027 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113257s
	[INFO] 10.244.1.2:60680 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000087182s
	[INFO] 10.244.1.2:36287 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000080566s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=2678&timeout=9m48s&timeoutSeconds=588&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=2641&timeout=8m59s&timeoutSeconds=539&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	
	
	==> coredns [bcd7170b050a] <==
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:47813 - 35687 "HINFO IN 8431179596625010309.1478595720938230641. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009077429s
	[INFO] 10.244.0.4:47786 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132847s
	[INFO] 10.244.0.4:40096 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.001354873s
	[INFO] 10.244.1.2:40884 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096214s
	[INFO] 10.244.1.2:55655 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000050276s
	[INFO] 10.244.1.2:47690 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000614458s
	[INFO] 10.244.0.4:45344 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000076791s
	[INFO] 10.244.0.4:53101 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.001042655s
	[INFO] 10.244.0.4:43889 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000082483s
	[INFO] 10.244.0.4:59210 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.00074308s
	[INFO] 10.244.0.4:38429 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00010233s
	[INFO] 10.244.0.4:48679 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00007214s
	[INFO] 10.244.1.2:43879 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000062278s
	[INFO] 10.244.1.2:45902 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000100209s
	[INFO] 10.244.1.2:39740 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071593s
	[INFO] 10.244.0.4:50878 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000077864s
	[INFO] 10.244.0.4:51260 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078903s
	[INFO] 10.244.0.4:38206 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000049117s
	[INFO] 10.244.1.2:54952 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000071797s
	[INFO] 10.244.1.2:36478 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063757s
	[INFO] 10.244.0.4:43240 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000066616s
	[INFO] 10.244.0.4:60894 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000052873s
	[INFO] 10.244.1.2:43932 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00008816s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=2678&timeout=6m23s&timeoutSeconds=383&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server: etcdserver: request timed out
	
	
	==> dmesg <==
	[  +2.760553] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +1.351722] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000003] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug16 17:02] systemd-fstab-generator[519]: Ignoring "noauto" option for root device
	[  +0.114039] systemd-fstab-generator[531]: Ignoring "noauto" option for root device
	[  +1.207693] kauditd_printk_skb: 42 callbacks suppressed
	[  +0.611653] systemd-fstab-generator[786]: Ignoring "noauto" option for root device
	[  +0.265175] systemd-fstab-generator[846]: Ignoring "noauto" option for root device
	[  +0.101006] systemd-fstab-generator[858]: Ignoring "noauto" option for root device
	[  +0.123308] systemd-fstab-generator[872]: Ignoring "noauto" option for root device
	[  +2.497195] systemd-fstab-generator[1086]: Ignoring "noauto" option for root device
	[  +0.110299] systemd-fstab-generator[1098]: Ignoring "noauto" option for root device
	[  +0.095670] systemd-fstab-generator[1110]: Ignoring "noauto" option for root device
	[  +0.122299] systemd-fstab-generator[1126]: Ignoring "noauto" option for root device
	[  +3.636333] systemd-fstab-generator[1227]: Ignoring "noauto" option for root device
	[  +0.056190] kauditd_printk_skb: 233 callbacks suppressed
	[  +2.505796] systemd-fstab-generator[1479]: Ignoring "noauto" option for root device
	[  +3.634449] systemd-fstab-generator[1611]: Ignoring "noauto" option for root device
	[  +0.055756] kauditd_printk_skb: 70 callbacks suppressed
	[  +6.910973] systemd-fstab-generator[2107]: Ignoring "noauto" option for root device
	[  +0.079351] kauditd_printk_skb: 72 callbacks suppressed
	[  +9.264773] kauditd_printk_skb: 51 callbacks suppressed
	[Aug16 17:03] kauditd_printk_skb: 26 callbacks suppressed
	[Aug16 17:18] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [d8dadff6cec7] <==
	{"level":"warn","ts":"2024-08-16T17:19:55.643522Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"13.056094813s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" limit:500 ","response":"","error":"etcdserver: request timed out"}
	{"level":"warn","ts":"2024-08-16T17:19:55.643529Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"13.099041634s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/\" range_end:\"/registry/configmaps/kube-system0\" limit:500 ","response":"","error":"etcdserver: request timed out"}
	{"level":"warn","ts":"2024-08-16T17:19:55.643536Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"13.134394188s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" limit:500 ","response":"","error":"etcdserver: request timed out"}
	{"level":"warn","ts":"2024-08-16T17:19:55.643542Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"13.135301178s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" limit:500 ","response":"","error":"etcdserver: request timed out"}
	{"level":"warn","ts":"2024-08-16T17:19:55.643548Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"13.146866933s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/\" range_end:\"/registry/roles0\" limit:500 ","response":"","error":"etcdserver: request timed out"}
	{"level":"warn","ts":"2024-08-16T17:19:55.643554Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"13.157285587s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumes/\" range_end:\"/registry/persistentvolumes0\" limit:500 ","response":"","error":"etcdserver: request timed out"}
	{"level":"warn","ts":"2024-08-16T17:19:55.643587Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"13.764331919s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/apiserver-nxwuvbfn6oczjrrqlvkf5enpgi\" ","response":"","error":"etcdserver: request timed out"}
	{"level":"info","ts":"2024-08-16T17:19:55.655680Z","caller":"traceutil/trace.go:171","msg":"trace[2070135541] range","detail":"{range_begin:/registry/leases/kube-system/apiserver-nxwuvbfn6oczjrrqlvkf5enpgi; range_end:; }","duration":"13.77642328s","start":"2024-08-16T17:19:41.879251Z","end":"2024-08-16T17:19:55.655674Z","steps":["trace[2070135541] 'agreement among raft nodes before linearized reading'  (duration: 13.764332029s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T17:19:55.655730Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T17:19:41.879229Z","time spent":"13.776493829s","remote":"127.0.0.1:59130","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":0,"response size":0,"request content":"key:\"/registry/leases/kube-system/apiserver-nxwuvbfn6oczjrrqlvkf5enpgi\" "}
	{"level":"info","ts":"2024-08-16T17:19:55.655771Z","caller":"traceutil/trace.go:171","msg":"trace[1251413878] range","detail":"{range_begin:/registry/replicasets/; range_end:/registry/replicasets0; }","duration":"12.984204163s","start":"2024-08-16T17:19:42.671563Z","end":"2024-08-16T17:19:55.655767Z","steps":["trace[1251413878] 'agreement among raft nodes before linearized reading'  (duration: 12.97168777s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T17:19:55.655789Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T17:19:42.671554Z","time spent":"12.984228074s","remote":"127.0.0.1:59334","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":0,"response size":0,"request content":"key:\"/registry/replicasets/\" range_end:\"/registry/replicasets0\" limit:10000 "}
	{"level":"info","ts":"2024-08-16T17:19:55.655804Z","caller":"traceutil/trace.go:171","msg":"trace[348668695] range","detail":"{range_begin:/registry/limitranges/; range_end:/registry/limitranges0; }","duration":"13.024830491s","start":"2024-08-16T17:19:42.630969Z","end":"2024-08-16T17:19:55.655799Z","steps":["trace[348668695] 'agreement among raft nodes before linearized reading'  (duration: 13.01253677s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T17:19:55.655865Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T17:19:42.630811Z","time spent":"13.025047432s","remote":"127.0.0.1:59026","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":0,"response size":0,"request content":"key:\"/registry/limitranges/\" range_end:\"/registry/limitranges0\" limit:500 "}
	{"level":"info","ts":"2024-08-16T17:19:55.655883Z","caller":"traceutil/trace.go:171","msg":"trace[9333085] range","detail":"{range_begin:/registry/runtimeclasses/; range_end:/registry/runtimeclasses0; }","duration":"13.068455453s","start":"2024-08-16T17:19:42.587424Z","end":"2024-08-16T17:19:55.655880Z","steps":["trace[9333085] 'agreement among raft nodes before linearized reading'  (duration: 13.056095269s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T17:19:55.655892Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T17:19:42.587342Z","time spent":"13.068546782s","remote":"127.0.0.1:59160","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":0,"request content":"key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" limit:500 "}
	{"level":"info","ts":"2024-08-16T17:19:55.655970Z","caller":"traceutil/trace.go:171","msg":"trace[1201690917] range","detail":"{range_begin:/registry/configmaps/kube-system/; range_end:/registry/configmaps/kube-system0; }","duration":"13.1114482s","start":"2024-08-16T17:19:42.544485Z","end":"2024-08-16T17:19:55.655933Z","steps":["trace[1201690917] 'agreement among raft nodes before linearized reading'  (duration: 13.099041823s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T17:19:55.656007Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T17:19:42.544460Z","time spent":"13.111541285s","remote":"127.0.0.1:59002","response type":"/etcdserverpb.KV/Range","request count":0,"request size":73,"response count":0,"response size":0,"request content":"key:\"/registry/configmaps/kube-system/\" range_end:\"/registry/configmaps/kube-system0\" limit:500 "}
	{"level":"info","ts":"2024-08-16T17:19:55.656026Z","caller":"traceutil/trace.go:171","msg":"trace[649826130] range","detail":"{range_begin:/registry/storageclasses/; range_end:/registry/storageclasses0; }","duration":"13.14688285s","start":"2024-08-16T17:19:42.509139Z","end":"2024-08-16T17:19:55.656022Z","steps":["trace[649826130] 'agreement among raft nodes before linearized reading'  (duration: 13.134394369s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T17:19:55.656057Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T17:19:42.509080Z","time spent":"13.146972231s","remote":"127.0.0.1:59232","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":0,"request content":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" limit:500 "}
	{"level":"info","ts":"2024-08-16T17:19:55.656106Z","caller":"traceutil/trace.go:171","msg":"trace[1575828216] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; }","duration":"13.147863897s","start":"2024-08-16T17:19:42.508238Z","end":"2024-08-16T17:19:55.656102Z","steps":["trace[1575828216] 'agreement among raft nodes before linearized reading'  (duration: 13.135301453s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T17:19:55.656171Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T17:19:42.508211Z","time spent":"13.147953399s","remote":"127.0.0.1:59074","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":0,"request content":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" limit:500 "}
	{"level":"info","ts":"2024-08-16T17:19:55.656188Z","caller":"traceutil/trace.go:171","msg":"trace[396545373] range","detail":"{range_begin:/registry/roles/; range_end:/registry/roles0; }","duration":"13.159506142s","start":"2024-08-16T17:19:42.496679Z","end":"2024-08-16T17:19:55.656185Z","steps":["trace[396545373] 'agreement among raft nodes before linearized reading'  (duration: 13.146867331s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T17:19:55.656219Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T17:19:42.496650Z","time spent":"13.159563431s","remote":"127.0.0.1:59180","response type":"/etcdserverpb.KV/Range","request count":0,"request size":39,"response count":0,"response size":0,"request content":"key:\"/registry/roles/\" range_end:\"/registry/roles0\" limit:500 "}
	{"level":"info","ts":"2024-08-16T17:19:55.656234Z","caller":"traceutil/trace.go:171","msg":"trace[2032376396] range","detail":"{range_begin:/registry/persistentvolumes/; range_end:/registry/persistentvolumes0; }","duration":"13.169965829s","start":"2024-08-16T17:19:42.486266Z","end":"2024-08-16T17:19:55.656232Z","steps":["trace[2032376396] 'agreement among raft nodes before linearized reading'  (duration: 13.157286079s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T17:19:55.656298Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T17:19:42.486237Z","time spent":"13.170054913s","remote":"127.0.0.1:59036","response type":"/etcdserverpb.KV/Range","request count":0,"request size":63,"response count":0,"response size":0,"request content":"key:\"/registry/persistentvolumes/\" range_end:\"/registry/persistentvolumes0\" limit:500 "}
	
	
	==> kernel <==
	 17:19:55 up 18 min,  0 users,  load average: 1.52, 0.83, 0.40
	Linux ha-286000 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2677394dcd66] <==
	I0816 17:19:25.224422       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:19:25.224498       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:19:25.224970       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 17:19:25.225069       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	I0816 17:19:25.225527       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:19:25.225639       1 main.go:299] handling current node
	I0816 17:19:35.223817       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:19:35.224039       1 main.go:299] handling current node
	I0816 17:19:35.225058       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:19:35.225414       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:19:35.225677       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 17:19:35.225815       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	I0816 17:19:45.231770       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:19:45.231892       1 main.go:299] handling current node
	I0816 17:19:45.231951       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:19:45.231982       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:19:45.232347       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 17:19:45.232392       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	E0816 17:19:49.084717       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=2678&timeout=6m47s&timeoutSeconds=407&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	I0816 17:19:55.227769       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:19:55.227907       1 main.go:299] handling current node
	I0816 17:19:55.227947       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:19:55.227966       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:19:55.228192       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 17:19:55.228860       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [5f3adc202a7a] <==
	I0816 17:19:30.942523       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0816 17:19:30.942641       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0816 17:19:30.943084       1 naming_controller.go:305] Shutting down NamingConditionController
	I0816 17:19:30.943119       1 crd_finalizer.go:281] Shutting down CRDFinalizer
	I0816 17:19:30.943128       1 apiapproval_controller.go:201] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	E0816 17:19:30.943895       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"context canceled\"}: context canceled" logger="UnhandledError"
	I0816 17:19:30.946681       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0816 17:19:30.946696       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	I0816 17:19:30.947109       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0816 17:19:30.947141       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0816 17:19:30.947645       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	E0816 17:19:30.947973       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0816 17:19:30.949761       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0816 17:19:30.950875       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="6.918732ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/apiserver-nxwuvbfn6oczjrrqlvkf5enpgi" result=null
	E0816 17:19:32.789945       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError"
	I0816 17:19:32.790673       1 controller.go:157] Shutting down quota evaluator
	I0816 17:19:32.790737       1 controller.go:176] quota evaluator worker shutdown
	I0816 17:19:32.790806       1 controller.go:176] quota evaluator worker shutdown
	I0816 17:19:32.790856       1 controller.go:176] quota evaluator worker shutdown
	I0816 17:19:32.790865       1 controller.go:176] quota evaluator worker shutdown
	I0816 17:19:32.790871       1 controller.go:176] quota evaluator worker shutdown
	E0816 17:19:32.792325       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0816 17:19:32.794880       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0816 17:19:32.796732       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0816 17:19:32.798366       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="8.048882ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock" result=null
	
	
	==> kube-apiserver [8d7a6d0f9537] <==
	E0816 17:19:55.657409       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:\"etcdserver: request timed out\"}: etcdserver: request timed out" logger="UnhandledError"
	E0816 17:19:55.657571       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:\"etcdserver: request timed out\"}: etcdserver: request timed out" logger="UnhandledError"
	E0816 17:19:55.657576       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:\"etcdserver: request timed out\"}: etcdserver: request timed out" logger="UnhandledError"
	E0816 17:19:55.657682       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:\"etcdserver: request timed out\"}: etcdserver: request timed out" logger="UnhandledError"
	W0816 17:19:55.657728       1 reflector.go:561] storage/cacher.go:/secrets: failed to list *core.Secret: etcdserver: request timed out
	E0816 17:19:55.657794       1 cacher.go:478] cacher (secrets): unexpected ListAndWatch error: failed to list *core.Secret: etcdserver: request timed out; reinitializing...
	W0816 17:19:55.659450       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: etcdserver: request timed out
	E0816 17:19:55.659511       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: etcdserver: request timed out" logger="UnhandledError"
	E0816 17:19:55.659554       1 controller.go:145] "Failed to ensure lease exists, will retry" err="etcdserver: request timed out" interval="400ms"
	W0816 17:19:55.659598       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: etcdserver: request timed out
	E0816 17:19:55.659644       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: etcdserver: request timed out" logger="UnhandledError"
	W0816 17:19:55.659673       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: etcdserver: request timed out
	E0816 17:19:55.659716       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: etcdserver: request timed out" logger="UnhandledError"
	W0816 17:19:55.659956       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: etcdserver: request timed out
	E0816 17:19:55.660007       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: etcdserver: request timed out" logger="UnhandledError"
	W0816 17:19:55.660040       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Role: etcdserver: request timed out
	E0816 17:19:55.660057       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Role: failed to list *v1.Role: etcdserver: request timed out" logger="UnhandledError"
	W0816 17:19:55.660304       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.LimitRange: etcdserver: request timed out
	E0816 17:19:55.660377       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.LimitRange: failed to list *v1.LimitRange: etcdserver: request timed out" logger="UnhandledError"
	W0816 17:19:55.660317       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: etcdserver: request timed out
	E0816 17:19:55.660411       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: etcdserver: request timed out" logger="UnhandledError"
	W0816 17:19:55.714911       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: storage is (re)initializing
	E0816 17:19:55.715036       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: storage is (re)initializing" logger="UnhandledError"
	W0816 17:19:55.760203       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.Lease: storage is (re)initializing
	E0816 17:19:55.760280       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.Lease: failed to list *v1.Lease: storage is (re)initializing" logger="UnhandledError"
	
	
	==> kube-controller-manager [8f5867ee99d9] <==
	W0816 17:19:50.036744       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Role: roles.rbac.authorization.k8s.io is forbidden: User "system:kube-controller-manager" cannot list resource "roles" in API group "rbac.authorization.k8s.io" at the cluster scope
	E0816 17:19:50.036837       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Role: failed to list *v1.Role: roles.rbac.authorization.k8s.io is forbidden: User \"system:kube-controller-manager\" cannot list resource \"roles\" in API group \"rbac.authorization.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:19:50.157284       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "node-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	W0816 17:19:50.248364       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-controller-manager" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 17:19:50.248499       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-controller-manager\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0816 17:19:50.340355       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: unknown" logger="UnhandledError"
	W0816 17:19:50.466262       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ServiceAccount: serviceaccounts is forbidden: User "system:kube-controller-manager" cannot list resource "serviceaccounts" in API group "" at the cluster scope
	E0816 17:19:50.466302       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ServiceAccount: failed to list *v1.ServiceAccount: serviceaccounts is forbidden: User \"system:kube-controller-manager\" cannot list resource \"serviceaccounts\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:19:50.768347       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-controller-manager" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0816 17:19:50.768668       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-controller-manager\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:19:50.910368       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Endpoints: endpoints is forbidden: User "system:kube-controller-manager" cannot list resource "endpoints" in API group "" at the cluster scope
	E0816 17:19:50.910399       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: endpoints is forbidden: User \"system:kube-controller-manager\" cannot list resource \"endpoints\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:19:50.916615       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Job: jobs.batch is forbidden: User "system:kube-controller-manager" cannot list resource "jobs" in API group "batch" at the cluster scope
	E0816 17:19:50.916664       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Job: failed to list *v1.Job: jobs.batch is forbidden: User \"system:kube-controller-manager\" cannot list resource \"jobs\" in API group \"batch\" at the cluster scope" logger="UnhandledError"
	W0816 17:19:52.159046       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "node-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	E0816 17:19:52.159408       1 node_lifecycle_controller.go:720] "Failed while getting a Node to retry updating node health. Probably Node was deleted" logger="node-lifecycle-controller" node="ha-286000-m04"
	E0816 17:19:52.159707       1 node_lifecycle_controller.go:725] "Update health of Node from Controller error, Skipping - no pods will be evicted" err="Get \"https://192.169.0.5:8443/api/v1/nodes/ha-286000-m04\": failed to get token for kube-system/node-controller: timed out waiting for the condition" logger="node-lifecycle-controller" node=""
	W0816 17:19:52.160500       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "node-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	W0816 17:19:52.662703       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "node-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	W0816 17:19:53.664523       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "node-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	W0816 17:19:55.666439       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "node-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	E0816 17:19:55.666588       1 node_lifecycle_controller.go:720] "Failed while getting a Node to retry updating node health. Probably Node was deleted" logger="node-lifecycle-controller" node="ha-286000-m02"
	E0816 17:19:55.666659       1 node_lifecycle_controller.go:725] "Update health of Node from Controller error, Skipping - no pods will be evicted" err="Get \"https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02\": failed to get token for kube-system/node-controller: timed out waiting for the condition" logger="node-lifecycle-controller" node=""
	W0816 17:19:55.667474       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "node-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	W0816 17:19:56.169290       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "node-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	
	
	==> kube-proxy [81f6c96d4649] <==
	E0816 17:18:54.621937       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2678\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:18:54.622093       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2600": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:18:54.622234       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2600\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:18:57.695127       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2592": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:18:57.696982       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2592\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:00.770881       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2600": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:00.770973       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2600\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:00.771455       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2678": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:00.771540       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2678\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:03.838026       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2592": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:03.838287       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2592\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:09.980567       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2600": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:09.980625       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2600\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:13.053000       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2592": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:13.053145       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2592\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:16.125305       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2678": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:16.125738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2678\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:28.413017       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2600": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:28.413242       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2600\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:37.633251       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2592": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:37.633353       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2592\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:37.633417       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2678": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:37.633437       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2678\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:56.059814       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2600": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:56.059845       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2600\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [f7b2e9efdd94] <==
	W0816 17:02:23.166962       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 17:02:23.167003       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:23.255186       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 17:02:23.255230       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:02:23.456273       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 17:02:23.456447       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0816 17:02:26.460413       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0816 17:05:02.827326       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-k9m92\": pod busybox-7dff88458-k9m92 is already assigned to node \"ha-286000-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-k9m92" node="ha-286000-m02"
	E0816 17:05:02.827387       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a516859c-0da8-4ba9-a896-ac720495818f(default/busybox-7dff88458-k9m92) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-k9m92"
	E0816 17:05:02.827403       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-k9m92\": pod busybox-7dff88458-k9m92 is already assigned to node \"ha-286000-m02\"" pod="default/busybox-7dff88458-k9m92"
	I0816 17:05:02.827520       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-k9m92" node="ha-286000-m02"
	E0816 17:19:45.302989       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0816 17:19:46.606412       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0816 17:19:46.678347       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0816 17:19:46.958273       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0816 17:19:47.081696       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0816 17:19:47.244382       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0816 17:19:48.073723       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0816 17:19:49.684930       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0816 17:19:49.735676       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0816 17:19:51.347636       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0816 17:19:52.522518       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0816 17:19:54.388598       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0816 17:19:55.012299       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0816 17:19:56.454535       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	
	
	==> kubelet <==
	Aug 16 17:19:31 ha-286000 kubelet[2114]: E0816 17:19:31.485134    2114 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2572\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Aug 16 17:19:31 ha-286000 kubelet[2114]: I0816 17:19:31.484998    2114 status_manager.go:851] "Failed to get status for pod" podUID="9dfa3b06b26298e967397c0cc0146f44" pod="kube-system/kube-vip-ha-286000" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-vip-ha-286000\": dial tcp 192.169.0.254:8443: connect: no route to host"
	Aug 16 17:19:34 ha-286000 kubelet[2114]: W0816 17:19:34.556180    2114 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=2678": dial tcp 192.169.0.254:8443: connect: no route to host
	Aug 16 17:19:34 ha-286000 kubelet[2114]: E0816 17:19:34.556301    2114 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=2678\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Aug 16 17:19:34 ha-286000 kubelet[2114]: W0816 17:19:34.556178    2114 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=2592": dial tcp 192.169.0.254:8443: connect: no route to host
	Aug 16 17:19:34 ha-286000 kubelet[2114]: E0816 17:19:34.556343    2114 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=2592\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Aug 16 17:19:34 ha-286000 kubelet[2114]: W0816 17:19:34.556335    2114 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=2592": dial tcp 192.169.0.254:8443: connect: no route to host
	Aug 16 17:19:34 ha-286000 kubelet[2114]: E0816 17:19:34.556391    2114 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=2592\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Aug 16 17:19:34 ha-286000 kubelet[2114]: I0816 17:19:34.556417    2114 status_manager.go:851] "Failed to get status for pod" podUID="ce52583d9f21bda5dffbc86c2c7fce8d" pod="kube-system/etcd-ha-286000" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-ha-286000\": dial tcp 192.169.0.254:8443: connect: no route to host"
	Aug 16 17:19:34 ha-286000 kubelet[2114]: W0816 17:19:34.556479    2114 reflector.go:561] pkg/kubelet/config/apiserver.go:66: failed to list *v1.Pod: Get "https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dha-286000&resourceVersion=2591": dial tcp 192.169.0.254:8443: connect: no route to host
	Aug 16 17:19:34 ha-286000 kubelet[2114]: E0816 17:19:34.556521    2114 reflector.go:158] "Unhandled Error" err="pkg/kubelet/config/apiserver.go:66: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dha-286000&resourceVersion=2591\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Aug 16 17:19:37 ha-286000 kubelet[2114]: W0816 17:19:37.630955    2114 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2477": dial tcp 192.169.0.254:8443: connect: no route to host
	Aug 16 17:19:37 ha-286000 kubelet[2114]: E0816 17:19:37.631487    2114 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2477\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Aug 16 17:19:37 ha-286000 kubelet[2114]: I0816 17:19:37.631656    2114 status_manager.go:851] "Failed to get status for pod" podUID="54fd9c91db8add4ea97d383d73f94dbe" pod="kube-system/kube-apiserver-ha-286000" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000\": dial tcp 192.169.0.254:8443: connect: no route to host"
	Aug 16 17:19:37 ha-286000 kubelet[2114]: E0816 17:19:37.633115    2114 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-286000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
	Aug 16 17:19:40 ha-286000 kubelet[2114]: E0816 17:19:40.700302    2114 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events\": dial tcp 192.169.0.254:8443: connect: no route to host" event="&Event{ObjectMeta:{kube-apiserver-ha-286000.17ec450d53f6e64c  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-286000,UID:54fd9c91db8add4ea97d383d73f94dbe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ha-286000,},FirstTimestamp:2024-08-16 17:18:00.921638476 +0000 UTC m=+935.425081470,LastTimestamp:2024-08-16 17:18:00.921638476 +0000 UTC m=+935.425081470,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-286000,}"
	Aug 16 17:19:40 ha-286000 kubelet[2114]: I0816 17:19:40.700719    2114 status_manager.go:851] "Failed to get status for pod" podUID="9dfa3b06b26298e967397c0cc0146f44" pod="kube-system/kube-vip-ha-286000" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-vip-ha-286000\": dial tcp 192.169.0.254:8443: connect: no route to host"
	Aug 16 17:19:43 ha-286000 kubelet[2114]: I0816 17:19:43.772778    2114 status_manager.go:851] "Failed to get status for pod" podUID="ce52583d9f21bda5dffbc86c2c7fce8d" pod="kube-system/etcd-ha-286000" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-ha-286000\": dial tcp 192.169.0.254:8443: connect: no route to host"
	Aug 16 17:19:46 ha-286000 kubelet[2114]: E0816 17:19:46.844493    2114 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-286000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
	Aug 16 17:19:46 ha-286000 kubelet[2114]: I0816 17:19:46.845089    2114 status_manager.go:851] "Failed to get status for pod" podUID="54fd9c91db8add4ea97d383d73f94dbe" pod="kube-system/kube-apiserver-ha-286000" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000\": dial tcp 192.169.0.254:8443: connect: no route to host"
	Aug 16 17:19:49 ha-286000 kubelet[2114]: I0816 17:19:49.916037    2114 status_manager.go:851] "Failed to get status for pod" podUID="4805d53b-2db3-4092-a3f2-d4a854e93adc" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.169.0.254:8443: connect: no route to host"
	Aug 16 17:19:52 ha-286000 kubelet[2114]: I0816 17:19:52.987903    2114 status_manager.go:851] "Failed to get status for pod" podUID="54fd9c91db8add4ea97d383d73f94dbe" pod="kube-system/kube-apiserver-ha-286000" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000\": dial tcp 192.169.0.254:8443: connect: no route to host"
	Aug 16 17:19:52 ha-286000 kubelet[2114]: E0816 17:19:52.987866    2114 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events\": dial tcp 192.169.0.254:8443: connect: no route to host" event="&Event{ObjectMeta:{kube-apiserver-ha-286000.17ec450d53f6e64c  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-286000,UID:54fd9c91db8add4ea97d383d73f94dbe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ha-286000,},FirstTimestamp:2024-08-16 17:18:00.921638476 +0000 UTC m=+935.425081470,LastTimestamp:2024-08-16 17:18:00.921638476 +0000 UTC m=+935.425081470,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-286000,}"
	Aug 16 17:19:56 ha-286000 kubelet[2114]: I0816 17:19:56.060051    2114 status_manager.go:851] "Failed to get status for pod" podUID="9dfa3b06b26298e967397c0cc0146f44" pod="kube-system/kube-vip-ha-286000" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-vip-ha-286000\": dial tcp 192.169.0.254:8443: connect: no route to host"
	Aug 16 17:19:56 ha-286000 kubelet[2114]: E0816 17:19:56.060231    2114 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-286000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-286000 -n ha-286000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-286000 -n ha-286000: exit status 2 (17.609092578s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-286000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (64.50s)
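The status probe above can be replayed outside the test harness. What follows is a minimal, hypothetical Go sketch of the same check: the binary path, profile name, and node name are copied from the log, and the exit-code handling mirrors the harness note that exit status 2 "may be ok" (it reports a stopped component rather than a harness failure).

// Hypothetical replay of the helpers_test.go status probe above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same command the harness ran; minikube exits 2 when a component is stopped.
	cmd := exec.Command("out/minikube-darwin-amd64",
		"status", "--format={{.APIServer}}", "-p", "ha-286000", "-n", "ha-286000")
	out, err := cmd.Output()
	fmt.Printf("apiserver state: %s\n", out)

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 2 {
		fmt.Println("exit status 2: a component is stopped (may be ok)")
	} else if err != nil {
		fmt.Println("status command failed:", err)
	}
}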

TestMultiControlPlane/serial/RestartSecondaryNode (163.59s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p ha-286000 node start m02 -v=7 --alsologtostderr
E0816 10:20:35.652076    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/functional-373000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:21:32.739331    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/addons-725000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:21:58.724015    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/functional-373000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p ha-286000 node start m02 -v=7 --alsologtostderr: (1m46.587149155s)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-286000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-286000 status -v=7 --alsologtostderr: exit status 2 (435.189476ms)

-- stdout --
	ha-286000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-286000-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-286000-m03
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-286000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0816 10:22:00.953987    4457 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:22:00.954200    4457 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:22:00.954206    4457 out.go:358] Setting ErrFile to fd 2...
	I0816 10:22:00.954209    4457 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:22:00.954407    4457 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1276/.minikube/bin
	I0816 10:22:00.954613    4457 out.go:352] Setting JSON to false
	I0816 10:22:00.954633    4457 mustload.go:65] Loading cluster: ha-286000
	I0816 10:22:00.954675    4457 notify.go:220] Checking for updates...
	I0816 10:22:00.954958    4457 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:22:00.954975    4457 status.go:255] checking status of ha-286000 ...
	I0816 10:22:00.955369    4457 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:00.955430    4457 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:00.964369    4457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51828
	I0816 10:22:00.964723    4457 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:00.965153    4457 main.go:141] libmachine: Using API Version  1
	I0816 10:22:00.965162    4457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:00.965387    4457 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:00.965498    4457 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:22:00.965583    4457 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:22:00.965660    4457 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:22:00.966620    4457 status.go:330] ha-286000 host status = "Running" (err=<nil>)
	I0816 10:22:00.966640    4457 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:22:00.966882    4457 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:00.966905    4457 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:00.975275    4457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51830
	I0816 10:22:00.975589    4457 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:00.975926    4457 main.go:141] libmachine: Using API Version  1
	I0816 10:22:00.975949    4457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:00.976192    4457 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:00.976304    4457 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:22:00.976385    4457 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:22:00.976636    4457 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:00.976663    4457 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:00.990776    4457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51832
	I0816 10:22:00.991120    4457 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:00.991449    4457 main.go:141] libmachine: Using API Version  1
	I0816 10:22:00.991463    4457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:00.991672    4457 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:00.991776    4457 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:22:00.991908    4457 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:22:00.991928    4457 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:22:00.992008    4457 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:22:00.992090    4457 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:22:00.992165    4457 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:22:00.992238    4457 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:22:01.025699    4457 ssh_runner.go:195] Run: systemctl --version
	I0816 10:22:01.030206    4457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:22:01.041765    4457 kubeconfig.go:125] found "ha-286000" server: "https://192.169.0.254:8443"
	I0816 10:22:01.041789    4457 api_server.go:166] Checking apiserver status ...
	I0816 10:22:01.041831    4457 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:22:01.053248    4457 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/7508/cgroup
	W0816 10:22:01.060533    4457 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/7508/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:22:01.060583    4457 ssh_runner.go:195] Run: ls
	I0816 10:22:01.064007    4457 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0816 10:22:01.068600    4457 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0816 10:22:01.068612    4457 status.go:422] ha-286000 apiserver status = Running (err=<nil>)
	I0816 10:22:01.068621    4457 status.go:257] ha-286000 status: &{Name:ha-286000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:22:01.068633    4457 status.go:255] checking status of ha-286000-m02 ...
	I0816 10:22:01.068902    4457 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:01.068922    4457 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:01.077940    4457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51836
	I0816 10:22:01.078277    4457 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:01.078596    4457 main.go:141] libmachine: Using API Version  1
	I0816 10:22:01.078606    4457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:01.078828    4457 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:01.078936    4457 main.go:141] libmachine: (ha-286000-m02) Calling .GetState
	I0816 10:22:01.079015    4457 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:22:01.079110    4457 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 4408
	I0816 10:22:01.080072    4457 status.go:330] ha-286000-m02 host status = "Running" (err=<nil>)
	I0816 10:22:01.080081    4457 host.go:66] Checking if "ha-286000-m02" exists ...
	I0816 10:22:01.080338    4457 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:01.080363    4457 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:01.089182    4457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51838
	I0816 10:22:01.089507    4457 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:01.089820    4457 main.go:141] libmachine: Using API Version  1
	I0816 10:22:01.089835    4457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:01.090045    4457 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:01.090158    4457 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:22:01.090241    4457 host.go:66] Checking if "ha-286000-m02" exists ...
	I0816 10:22:01.090484    4457 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:01.090507    4457 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:01.099280    4457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51840
	I0816 10:22:01.099657    4457 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:01.099990    4457 main.go:141] libmachine: Using API Version  1
	I0816 10:22:01.100006    4457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:01.100226    4457 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:01.100345    4457 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:22:01.100482    4457 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:22:01.100494    4457 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:22:01.100585    4457 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:22:01.100665    4457 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:22:01.100743    4457 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:22:01.100824    4457 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:22:01.131840    4457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:22:01.142375    4457 kubeconfig.go:125] found "ha-286000" server: "https://192.169.0.254:8443"
	I0816 10:22:01.142389    4457 api_server.go:166] Checking apiserver status ...
	I0816 10:22:01.142433    4457 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:22:01.153196    4457 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1970/cgroup
	W0816 10:22:01.161653    4457 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1970/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:22:01.161713    4457 ssh_runner.go:195] Run: ls
	I0816 10:22:01.165243    4457 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0816 10:22:01.168289    4457 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0816 10:22:01.168300    4457 status.go:422] ha-286000-m02 apiserver status = Running (err=<nil>)
	I0816 10:22:01.168308    4457 status.go:257] ha-286000-m02 status: &{Name:ha-286000-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:22:01.168318    4457 status.go:255] checking status of ha-286000-m03 ...
	I0816 10:22:01.168576    4457 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:01.168595    4457 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:01.177316    4457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51844
	I0816 10:22:01.177636    4457 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:01.177990    4457 main.go:141] libmachine: Using API Version  1
	I0816 10:22:01.178006    4457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:01.178217    4457 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:01.178326    4457 main.go:141] libmachine: (ha-286000-m03) Calling .GetState
	I0816 10:22:01.178410    4457 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:22:01.178490    4457 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:22:01.179448    4457 status.go:330] ha-286000-m03 host status = "Running" (err=<nil>)
	I0816 10:22:01.179456    4457 host.go:66] Checking if "ha-286000-m03" exists ...
	I0816 10:22:01.179709    4457 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:01.179741    4457 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:01.188504    4457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51846
	I0816 10:22:01.188882    4457 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:01.189229    4457 main.go:141] libmachine: Using API Version  1
	I0816 10:22:01.189244    4457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:01.189487    4457 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:01.189593    4457 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:22:01.189678    4457 host.go:66] Checking if "ha-286000-m03" exists ...
	I0816 10:22:01.189959    4457 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:01.189984    4457 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:01.198866    4457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51848
	I0816 10:22:01.199205    4457 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:01.199538    4457 main.go:141] libmachine: Using API Version  1
	I0816 10:22:01.199549    4457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:01.199767    4457 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:01.199881    4457 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:22:01.199994    4457 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:22:01.200005    4457 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:22:01.200085    4457 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:22:01.200164    4457 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:22:01.200240    4457 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:22:01.200316    4457 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:22:01.236248    4457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:22:01.247286    4457 kubeconfig.go:125] found "ha-286000" server: "https://192.169.0.254:8443"
	I0816 10:22:01.247300    4457 api_server.go:166] Checking apiserver status ...
	I0816 10:22:01.247336    4457 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 10:22:01.256841    4457 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:22:01.256856    4457 status.go:422] ha-286000-m03 apiserver status = Stopped (err=<nil>)
	I0816 10:22:01.256865    4457 status.go:257] ha-286000-m03 status: &{Name:ha-286000-m03 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:22:01.256876    4457 status.go:255] checking status of ha-286000-m04 ...
	I0816 10:22:01.257164    4457 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:01.257185    4457 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:01.265892    4457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51851
	I0816 10:22:01.266225    4457 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:01.266575    4457 main.go:141] libmachine: Using API Version  1
	I0816 10:22:01.266588    4457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:01.266806    4457 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:01.266904    4457 main.go:141] libmachine: (ha-286000-m04) Calling .GetState
	I0816 10:22:01.266982    4457 main.go:141] libmachine: (ha-286000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:22:01.267064    4457 main.go:141] libmachine: (ha-286000-m04) DBG | hyperkit pid from json: 4248
	I0816 10:22:01.268044    4457 status.go:330] ha-286000-m04 host status = "Running" (err=<nil>)
	I0816 10:22:01.268052    4457 host.go:66] Checking if "ha-286000-m04" exists ...
	I0816 10:22:01.268291    4457 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:01.268310    4457 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:01.276952    4457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51853
	I0816 10:22:01.277292    4457 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:01.277607    4457 main.go:141] libmachine: Using API Version  1
	I0816 10:22:01.277619    4457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:01.277846    4457 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:01.277953    4457 main.go:141] libmachine: (ha-286000-m04) Calling .GetIP
	I0816 10:22:01.278043    4457 host.go:66] Checking if "ha-286000-m04" exists ...
	I0816 10:22:01.278280    4457 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:01.278300    4457 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:01.286853    4457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51855
	I0816 10:22:01.287193    4457 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:01.287566    4457 main.go:141] libmachine: Using API Version  1
	I0816 10:22:01.287582    4457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:01.287804    4457 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:01.287923    4457 main.go:141] libmachine: (ha-286000-m04) Calling .DriverName
	I0816 10:22:01.288050    4457 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:22:01.288061    4457 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHHostname
	I0816 10:22:01.288133    4457 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHPort
	I0816 10:22:01.288198    4457 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHKeyPath
	I0816 10:22:01.288311    4457 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHUsername
	I0816 10:22:01.288397    4457 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m04/id_rsa Username:docker}
	I0816 10:22:01.321243    4457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:22:01.332674    4457 status.go:257] ha-286000-m04 status: &{Name:ha-286000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
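The stderr trace above shows how minikube decides each control-plane node's apiserver state: it looks for the process with sudo pgrep -xnf kube-apiserver.*minikube.*, tries to read the freezer cgroup (which exits 1 here, so that check is skipped), and finally probes https://192.169.0.254:8443/healthz, treating a 200 "ok" as Running. Below is a minimal Go sketch of that last healthz step only; it assumes anonymous access to /healthz and skips TLS verification, which is a simplification (minikube itself uses the cluster's own credentials).

// Simplified healthz probe, modeled on api_server.go:253 in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{
		// Assumption for this sketch: skip certificate verification.
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.169.0.254:8443/healthz")
	if err != nil {
		// On the failed node this surfaces as "no route to host".
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz %d: %s\n", resp.StatusCode, body) // 200 "ok" => Running
}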
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-286000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-286000 status -v=7 --alsologtostderr: exit status 2 (445.098839ms)

-- stdout --
	ha-286000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-286000-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-286000-m03
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-286000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0816 10:22:02.844741    4471 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:22:02.854322    4471 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:22:02.854342    4471 out.go:358] Setting ErrFile to fd 2...
	I0816 10:22:02.854350    4471 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:22:02.854755    4471 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1276/.minikube/bin
	I0816 10:22:02.855135    4471 out.go:352] Setting JSON to false
	I0816 10:22:02.855173    4471 mustload.go:65] Loading cluster: ha-286000
	I0816 10:22:02.855271    4471 notify.go:220] Checking for updates...
	I0816 10:22:02.855760    4471 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:22:02.855782    4471 status.go:255] checking status of ha-286000 ...
	I0816 10:22:02.856362    4471 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:02.856439    4471 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:02.865966    4471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51859
	I0816 10:22:02.866365    4471 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:02.866790    4471 main.go:141] libmachine: Using API Version  1
	I0816 10:22:02.866800    4471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:02.867028    4471 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:02.867142    4471 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:22:02.867235    4471 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:22:02.867316    4471 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:22:02.868311    4471 status.go:330] ha-286000 host status = "Running" (err=<nil>)
	I0816 10:22:02.868330    4471 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:22:02.868604    4471 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:02.868644    4471 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:02.877173    4471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51861
	I0816 10:22:02.877493    4471 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:02.877842    4471 main.go:141] libmachine: Using API Version  1
	I0816 10:22:02.877856    4471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:02.878088    4471 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:02.878196    4471 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:22:02.878283    4471 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:22:02.878548    4471 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:02.878573    4471 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:02.891231    4471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51863
	I0816 10:22:02.891568    4471 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:02.891886    4471 main.go:141] libmachine: Using API Version  1
	I0816 10:22:02.891897    4471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:02.892091    4471 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:02.892185    4471 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:22:02.892311    4471 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:22:02.892329    4471 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:22:02.892401    4471 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:22:02.892473    4471 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:22:02.892554    4471 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:22:02.892682    4471 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:22:02.925798    4471 ssh_runner.go:195] Run: systemctl --version
	I0816 10:22:02.930218    4471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:22:02.942628    4471 kubeconfig.go:125] found "ha-286000" server: "https://192.169.0.254:8443"
	I0816 10:22:02.942652    4471 api_server.go:166] Checking apiserver status ...
	I0816 10:22:02.942694    4471 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:22:02.954851    4471 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/7508/cgroup
	W0816 10:22:02.962917    4471 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/7508/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:22:02.962960    4471 ssh_runner.go:195] Run: ls
	I0816 10:22:02.966139    4471 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0816 10:22:02.969436    4471 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0816 10:22:02.969449    4471 status.go:422] ha-286000 apiserver status = Running (err=<nil>)
	I0816 10:22:02.969458    4471 status.go:257] ha-286000 status: &{Name:ha-286000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:22:02.969475    4471 status.go:255] checking status of ha-286000-m02 ...
	I0816 10:22:02.969728    4471 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:02.969751    4471 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:02.978391    4471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51867
	I0816 10:22:02.978725    4471 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:02.979047    4471 main.go:141] libmachine: Using API Version  1
	I0816 10:22:02.979058    4471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:02.979272    4471 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:02.979381    4471 main.go:141] libmachine: (ha-286000-m02) Calling .GetState
	I0816 10:22:02.979462    4471 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:22:02.979536    4471 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 4408
	I0816 10:22:02.980496    4471 status.go:330] ha-286000-m02 host status = "Running" (err=<nil>)
	I0816 10:22:02.980504    4471 host.go:66] Checking if "ha-286000-m02" exists ...
	I0816 10:22:02.980749    4471 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:02.980771    4471 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:02.989408    4471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51869
	I0816 10:22:02.989739    4471 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:02.990069    4471 main.go:141] libmachine: Using API Version  1
	I0816 10:22:02.990086    4471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:02.990310    4471 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:02.990446    4471 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:22:02.990537    4471 host.go:66] Checking if "ha-286000-m02" exists ...
	I0816 10:22:02.990794    4471 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:02.990838    4471 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:02.999326    4471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51871
	I0816 10:22:02.999686    4471 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:03.000019    4471 main.go:141] libmachine: Using API Version  1
	I0816 10:22:03.000031    4471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:03.000272    4471 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:03.000400    4471 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:22:03.000536    4471 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:22:03.000549    4471 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:22:03.000644    4471 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:22:03.000763    4471 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:22:03.000847    4471 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:22:03.000931    4471 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:22:03.030842    4471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:22:03.041553    4471 kubeconfig.go:125] found "ha-286000" server: "https://192.169.0.254:8443"
	I0816 10:22:03.041571    4471 api_server.go:166] Checking apiserver status ...
	I0816 10:22:03.041622    4471 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:22:03.052910    4471 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1970/cgroup
	W0816 10:22:03.060589    4471 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1970/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:22:03.060638    4471 ssh_runner.go:195] Run: ls
	I0816 10:22:03.063970    4471 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0816 10:22:03.066985    4471 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0816 10:22:03.066995    4471 status.go:422] ha-286000-m02 apiserver status = Running (err=<nil>)
	I0816 10:22:03.067004    4471 status.go:257] ha-286000-m02 status: &{Name:ha-286000-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:22:03.067016    4471 status.go:255] checking status of ha-286000-m03 ...
	I0816 10:22:03.067260    4471 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:03.067283    4471 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:03.076350    4471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51875
	I0816 10:22:03.076692    4471 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:03.077038    4471 main.go:141] libmachine: Using API Version  1
	I0816 10:22:03.077052    4471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:03.077270    4471 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:03.077382    4471 main.go:141] libmachine: (ha-286000-m03) Calling .GetState
	I0816 10:22:03.077463    4471 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:22:03.077537    4471 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:22:03.078530    4471 status.go:330] ha-286000-m03 host status = "Running" (err=<nil>)
	I0816 10:22:03.078538    4471 host.go:66] Checking if "ha-286000-m03" exists ...
	I0816 10:22:03.078797    4471 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:03.078824    4471 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:03.087539    4471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51877
	I0816 10:22:03.087889    4471 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:03.088226    4471 main.go:141] libmachine: Using API Version  1
	I0816 10:22:03.088239    4471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:03.088431    4471 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:03.088531    4471 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:22:03.088608    4471 host.go:66] Checking if "ha-286000-m03" exists ...
	I0816 10:22:03.088883    4471 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:03.088910    4471 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:03.097464    4471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51879
	I0816 10:22:03.097835    4471 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:03.098140    4471 main.go:141] libmachine: Using API Version  1
	I0816 10:22:03.098147    4471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:03.098374    4471 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:03.098498    4471 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:22:03.098628    4471 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:22:03.098639    4471 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:22:03.098733    4471 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:22:03.098816    4471 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:22:03.098895    4471 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:22:03.098960    4471 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:22:03.136040    4471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:22:03.146589    4471 kubeconfig.go:125] found "ha-286000" server: "https://192.169.0.254:8443"
	I0816 10:22:03.146605    4471 api_server.go:166] Checking apiserver status ...
	I0816 10:22:03.146646    4471 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 10:22:03.156179    4471 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:22:03.156189    4471 status.go:422] ha-286000-m03 apiserver status = Stopped (err=<nil>)
	I0816 10:22:03.156197    4471 status.go:257] ha-286000-m03 status: &{Name:ha-286000-m03 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:22:03.156206    4471 status.go:255] checking status of ha-286000-m04 ...
	I0816 10:22:03.156475    4471 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:03.156498    4471 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:03.165236    4471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51882
	I0816 10:22:03.165594    4471 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:03.165919    4471 main.go:141] libmachine: Using API Version  1
	I0816 10:22:03.165928    4471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:03.166128    4471 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:03.166240    4471 main.go:141] libmachine: (ha-286000-m04) Calling .GetState
	I0816 10:22:03.166335    4471 main.go:141] libmachine: (ha-286000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:22:03.166400    4471 main.go:141] libmachine: (ha-286000-m04) DBG | hyperkit pid from json: 4248
	I0816 10:22:03.167413    4471 status.go:330] ha-286000-m04 host status = "Running" (err=<nil>)
	I0816 10:22:03.167423    4471 host.go:66] Checking if "ha-286000-m04" exists ...
	I0816 10:22:03.167676    4471 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:03.167704    4471 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:03.176281    4471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51884
	I0816 10:22:03.176626    4471 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:03.176985    4471 main.go:141] libmachine: Using API Version  1
	I0816 10:22:03.177002    4471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:03.177193    4471 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:03.177285    4471 main.go:141] libmachine: (ha-286000-m04) Calling .GetIP
	I0816 10:22:03.177363    4471 host.go:66] Checking if "ha-286000-m04" exists ...
	I0816 10:22:03.177618    4471 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:03.177640    4471 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:03.186139    4471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51886
	I0816 10:22:03.186472    4471 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:03.186830    4471 main.go:141] libmachine: Using API Version  1
	I0816 10:22:03.186848    4471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:03.187049    4471 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:03.187149    4471 main.go:141] libmachine: (ha-286000-m04) Calling .DriverName
	I0816 10:22:03.187288    4471 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:22:03.187305    4471 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHHostname
	I0816 10:22:03.187381    4471 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHPort
	I0816 10:22:03.187463    4471 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHKeyPath
	I0816 10:22:03.187546    4471 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHUsername
	I0816 10:22:03.187625    4471 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m04/id_rsa Username:docker}
	I0816 10:22:03.220622    4471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:22:03.231782    4471 status.go:257] ha-286000-m04 status: &{Name:ha-286000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
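Each node's verdict is printed at status.go:257 as a Go struct literal. The hypothetical mirror below (field types assumed for illustration) makes the worker-node case explicit: minikube never probes an apiserver on a Worker node, so APIServer and Kubeconfig come back as Irrelevant rather than Running or Stopped.

// Shape mirrored from the &{...} literals at status.go:257 above.
package main

import "fmt"

type nodeStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	// Worker nodes skip the apiserver and kubeconfig checks entirely.
	m04 := nodeStatus{
		Name: "ha-286000-m04", Host: "Running", Kubelet: "Running",
		APIServer: "Irrelevant", Kubeconfig: "Irrelevant", Worker: true,
	}
	fmt.Printf("%+v\n", m04)
}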
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-286000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-286000 status -v=7 --alsologtostderr: exit status 2 (463.782431ms)

-- stdout --
	ha-286000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-286000-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-286000-m03
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-286000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0816 10:22:05.463989    4489 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:22:05.464289    4489 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:22:05.464294    4489 out.go:358] Setting ErrFile to fd 2...
	I0816 10:22:05.464298    4489 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:22:05.464485    4489 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1276/.minikube/bin
	I0816 10:22:05.464670    4489 out.go:352] Setting JSON to false
	I0816 10:22:05.464697    4489 mustload.go:65] Loading cluster: ha-286000
	I0816 10:22:05.464734    4489 notify.go:220] Checking for updates...
	I0816 10:22:05.465002    4489 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:22:05.465021    4489 status.go:255] checking status of ha-286000 ...
	I0816 10:22:05.465367    4489 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:05.465413    4489 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:05.474179    4489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51890
	I0816 10:22:05.474523    4489 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:05.474926    4489 main.go:141] libmachine: Using API Version  1
	I0816 10:22:05.474942    4489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:05.475174    4489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:05.475278    4489 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:22:05.475364    4489 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:22:05.475443    4489 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:22:05.476448    4489 status.go:330] ha-286000 host status = "Running" (err=<nil>)
	I0816 10:22:05.476469    4489 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:22:05.476722    4489 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:05.476743    4489 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:05.485153    4489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51892
	I0816 10:22:05.485477    4489 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:05.485860    4489 main.go:141] libmachine: Using API Version  1
	I0816 10:22:05.485885    4489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:05.486126    4489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:05.486241    4489 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:22:05.486319    4489 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:22:05.486570    4489 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:05.486598    4489 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:05.499321    4489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51894
	I0816 10:22:05.499677    4489 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:05.500002    4489 main.go:141] libmachine: Using API Version  1
	I0816 10:22:05.500014    4489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:05.500218    4489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:05.500329    4489 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:22:05.500464    4489 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:22:05.500484    4489 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:22:05.500562    4489 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:22:05.500658    4489 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:22:05.500769    4489 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:22:05.500863    4489 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:22:05.533897    4489 ssh_runner.go:195] Run: systemctl --version
	I0816 10:22:05.538192    4489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:22:05.548567    4489 kubeconfig.go:125] found "ha-286000" server: "https://192.169.0.254:8443"
	I0816 10:22:05.548589    4489 api_server.go:166] Checking apiserver status ...
	I0816 10:22:05.548626    4489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:22:05.559504    4489 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/7508/cgroup
	W0816 10:22:05.566579    4489 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/7508/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:22:05.566622    4489 ssh_runner.go:195] Run: ls
	I0816 10:22:05.573083    4489 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0816 10:22:05.588987    4489 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0816 10:22:05.589006    4489 status.go:422] ha-286000 apiserver status = Running (err=<nil>)
	I0816 10:22:05.589019    4489 status.go:257] ha-286000 status: &{Name:ha-286000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:22:05.589034    4489 status.go:255] checking status of ha-286000-m02 ...
	I0816 10:22:05.589308    4489 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:05.589337    4489 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:05.599348    4489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51898
	I0816 10:22:05.599790    4489 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:05.600199    4489 main.go:141] libmachine: Using API Version  1
	I0816 10:22:05.600217    4489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:05.600472    4489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:05.600605    4489 main.go:141] libmachine: (ha-286000-m02) Calling .GetState
	I0816 10:22:05.600723    4489 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:22:05.600825    4489 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 4408
	I0816 10:22:05.601939    4489 status.go:330] ha-286000-m02 host status = "Running" (err=<nil>)
	I0816 10:22:05.601953    4489 host.go:66] Checking if "ha-286000-m02" exists ...
	I0816 10:22:05.602236    4489 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:05.602269    4489 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:05.612087    4489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51900
	I0816 10:22:05.612519    4489 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:05.612976    4489 main.go:141] libmachine: Using API Version  1
	I0816 10:22:05.613001    4489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:05.613257    4489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:05.613419    4489 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:22:05.613533    4489 host.go:66] Checking if "ha-286000-m02" exists ...
	I0816 10:22:05.613835    4489 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:05.613860    4489 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:05.623765    4489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51902
	I0816 10:22:05.624168    4489 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:05.624553    4489 main.go:141] libmachine: Using API Version  1
	I0816 10:22:05.624565    4489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:05.624822    4489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:05.624963    4489 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:22:05.625135    4489 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:22:05.625150    4489 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:22:05.625267    4489 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:22:05.625382    4489 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:22:05.625519    4489 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:22:05.625647    4489 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:22:05.658583    4489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:22:05.677039    4489 kubeconfig.go:125] found "ha-286000" server: "https://192.169.0.254:8443"
	I0816 10:22:05.677054    4489 api_server.go:166] Checking apiserver status ...
	I0816 10:22:05.677093    4489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:22:05.691135    4489 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1970/cgroup
	W0816 10:22:05.698434    4489 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1970/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:22:05.698481    4489 ssh_runner.go:195] Run: ls
	I0816 10:22:05.701958    4489 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0816 10:22:05.705606    4489 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0816 10:22:05.705617    4489 status.go:422] ha-286000-m02 apiserver status = Running (err=<nil>)
	I0816 10:22:05.705625    4489 status.go:257] ha-286000-m02 status: &{Name:ha-286000-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:22:05.705635    4489 status.go:255] checking status of ha-286000-m03 ...
	I0816 10:22:05.705890    4489 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:05.705909    4489 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:05.714572    4489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51906
	I0816 10:22:05.714925    4489 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:05.715300    4489 main.go:141] libmachine: Using API Version  1
	I0816 10:22:05.715317    4489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:05.715513    4489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:05.715622    4489 main.go:141] libmachine: (ha-286000-m03) Calling .GetState
	I0816 10:22:05.715692    4489 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:22:05.715777    4489 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:22:05.716789    4489 status.go:330] ha-286000-m03 host status = "Running" (err=<nil>)
	I0816 10:22:05.716799    4489 host.go:66] Checking if "ha-286000-m03" exists ...
	I0816 10:22:05.717042    4489 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:05.717069    4489 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:05.725724    4489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51908
	I0816 10:22:05.726066    4489 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:05.726403    4489 main.go:141] libmachine: Using API Version  1
	I0816 10:22:05.726414    4489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:05.726602    4489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:05.726717    4489 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:22:05.726797    4489 host.go:66] Checking if "ha-286000-m03" exists ...
	I0816 10:22:05.727050    4489 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:05.727072    4489 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:05.735634    4489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51910
	I0816 10:22:05.736000    4489 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:05.736349    4489 main.go:141] libmachine: Using API Version  1
	I0816 10:22:05.736365    4489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:05.736568    4489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:05.736675    4489 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:22:05.736803    4489 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:22:05.736814    4489 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:22:05.736905    4489 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:22:05.736987    4489 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:22:05.737075    4489 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:22:05.737149    4489 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:22:05.773133    4489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:22:05.783834    4489 kubeconfig.go:125] found "ha-286000" server: "https://192.169.0.254:8443"
	I0816 10:22:05.783847    4489 api_server.go:166] Checking apiserver status ...
	I0816 10:22:05.783886    4489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 10:22:05.793672    4489 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:22:05.793683    4489 status.go:422] ha-286000-m03 apiserver status = Stopped (err=<nil>)
	I0816 10:22:05.793693    4489 status.go:257] ha-286000-m03 status: &{Name:ha-286000-m03 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:22:05.793703    4489 status.go:255] checking status of ha-286000-m04 ...
	I0816 10:22:05.793976    4489 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:05.793996    4489 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:05.802856    4489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51913
	I0816 10:22:05.803209    4489 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:05.803547    4489 main.go:141] libmachine: Using API Version  1
	I0816 10:22:05.803566    4489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:05.803757    4489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:05.803866    4489 main.go:141] libmachine: (ha-286000-m04) Calling .GetState
	I0816 10:22:05.803943    4489 main.go:141] libmachine: (ha-286000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:22:05.804019    4489 main.go:141] libmachine: (ha-286000-m04) DBG | hyperkit pid from json: 4248
	I0816 10:22:05.805042    4489 status.go:330] ha-286000-m04 host status = "Running" (err=<nil>)
	I0816 10:22:05.805052    4489 host.go:66] Checking if "ha-286000-m04" exists ...
	I0816 10:22:05.805287    4489 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:05.805312    4489 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:05.813892    4489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51915
	I0816 10:22:05.814359    4489 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:05.814723    4489 main.go:141] libmachine: Using API Version  1
	I0816 10:22:05.814741    4489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:05.814958    4489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:05.815083    4489 main.go:141] libmachine: (ha-286000-m04) Calling .GetIP
	I0816 10:22:05.815175    4489 host.go:66] Checking if "ha-286000-m04" exists ...
	I0816 10:22:05.815418    4489 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:05.815442    4489 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:05.824051    4489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51917
	I0816 10:22:05.824397    4489 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:05.824717    4489 main.go:141] libmachine: Using API Version  1
	I0816 10:22:05.824727    4489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:05.824962    4489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:05.825070    4489 main.go:141] libmachine: (ha-286000-m04) Calling .DriverName
	I0816 10:22:05.825193    4489 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:22:05.825203    4489 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHHostname
	I0816 10:22:05.825277    4489 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHPort
	I0816 10:22:05.825350    4489 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHKeyPath
	I0816 10:22:05.825425    4489 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHUsername
	I0816 10:22:05.825495    4489 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m04/id_rsa Username:docker}
	I0816 10:22:05.858836    4489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:22:05.870044    4489 status.go:257] ha-286000-m04 status: &{Name:ha-286000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
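The apiserver probe traced in the stderr block above follows a fixed fallback chain: `pgrep` for the kube-apiserver process, a best-effort freezer-cgroup lookup (which fails harmlessly with exit status 1 on cgroup v2 guests, hence the warning), then an HTTPS GET against `/healthz`. The Go sketch below reproduces that chain in isolation; it is illustrative only, not minikube's actual api_server.go, and the host, port, and pgrep pattern are simply copied from the log lines.

// Hypothetical sketch (not minikube's implementation) of the probe
// sequence visible above: find the kube-apiserver pid with pgrep,
// then fall back to an HTTPS /healthz check.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

func apiserverStatus(host string, port int) string {
	// Step 1: look for the process, as in
	// "sudo pgrep -xnf kube-apiserver.*minikube.*".
	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil || strings.TrimSpace(string(out)) == "" {
		return "Stopped" // pgrep exited 1: no matching process
	}
	// Step 2: probe healthz over HTTPS; the apiserver serves a
	// self-signed certificate, so skip verification for the probe.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(fmt.Sprintf("https://%s:%d/healthz", host, port))
	if err != nil {
		return "Stopped"
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok" {
		return "Running" // matches the "returned 200: ok" log lines
	}
	return "Error"
}

func main() {
	fmt.Println(apiserverStatus("192.169.0.254", 8443))
}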
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-286000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-286000 status -v=7 --alsologtostderr: exit status 2 (456.593566ms)

                                                
                                                
-- stdout --
	ha-286000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-286000-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-286000-m03
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-286000-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 10:22:08.643926    4505 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:22:08.644141    4505 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:22:08.644146    4505 out.go:358] Setting ErrFile to fd 2...
	I0816 10:22:08.644149    4505 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:22:08.644339    4505 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1276/.minikube/bin
	I0816 10:22:08.644525    4505 out.go:352] Setting JSON to false
	I0816 10:22:08.644548    4505 mustload.go:65] Loading cluster: ha-286000
	I0816 10:22:08.644590    4505 notify.go:220] Checking for updates...
	I0816 10:22:08.644864    4505 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:22:08.644884    4505 status.go:255] checking status of ha-286000 ...
	I0816 10:22:08.645241    4505 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:08.645286    4505 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:08.654290    4505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51921
	I0816 10:22:08.654635    4505 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:08.655043    4505 main.go:141] libmachine: Using API Version  1
	I0816 10:22:08.655054    4505 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:08.655254    4505 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:08.655363    4505 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:22:08.655436    4505 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:22:08.655519    4505 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:22:08.656533    4505 status.go:330] ha-286000 host status = "Running" (err=<nil>)
	I0816 10:22:08.656553    4505 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:22:08.656797    4505 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:08.656818    4505 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:08.665504    4505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51923
	I0816 10:22:08.665859    4505 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:08.666235    4505 main.go:141] libmachine: Using API Version  1
	I0816 10:22:08.666250    4505 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:08.666458    4505 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:08.666561    4505 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:22:08.666657    4505 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:22:08.666919    4505 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:08.666957    4505 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:08.678314    4505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51925
	I0816 10:22:08.678703    4505 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:08.679039    4505 main.go:141] libmachine: Using API Version  1
	I0816 10:22:08.679048    4505 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:08.679324    4505 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:08.679470    4505 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:22:08.679627    4505 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:22:08.679654    4505 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:22:08.679744    4505 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:22:08.679844    4505 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:22:08.679957    4505 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:22:08.680053    4505 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:22:08.728810    4505 ssh_runner.go:195] Run: systemctl --version
	I0816 10:22:08.738824    4505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:22:08.750728    4505 kubeconfig.go:125] found "ha-286000" server: "https://192.169.0.254:8443"
	I0816 10:22:08.750750    4505 api_server.go:166] Checking apiserver status ...
	I0816 10:22:08.750798    4505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:22:08.763584    4505 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/7508/cgroup
	W0816 10:22:08.771759    4505 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/7508/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:22:08.771811    4505 ssh_runner.go:195] Run: ls
	I0816 10:22:08.776307    4505 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0816 10:22:08.780907    4505 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0816 10:22:08.780921    4505 status.go:422] ha-286000 apiserver status = Running (err=<nil>)
	I0816 10:22:08.780930    4505 status.go:257] ha-286000 status: &{Name:ha-286000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:22:08.780945    4505 status.go:255] checking status of ha-286000-m02 ...
	I0816 10:22:08.781206    4505 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:08.781228    4505 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:08.790124    4505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51929
	I0816 10:22:08.790473    4505 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:08.790783    4505 main.go:141] libmachine: Using API Version  1
	I0816 10:22:08.790795    4505 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:08.791006    4505 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:08.791107    4505 main.go:141] libmachine: (ha-286000-m02) Calling .GetState
	I0816 10:22:08.791189    4505 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:22:08.791266    4505 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 4408
	I0816 10:22:08.792289    4505 status.go:330] ha-286000-m02 host status = "Running" (err=<nil>)
	I0816 10:22:08.792299    4505 host.go:66] Checking if "ha-286000-m02" exists ...
	I0816 10:22:08.792544    4505 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:08.792566    4505 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:08.801406    4505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51931
	I0816 10:22:08.801764    4505 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:08.802079    4505 main.go:141] libmachine: Using API Version  1
	I0816 10:22:08.802089    4505 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:08.802296    4505 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:08.802395    4505 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:22:08.802469    4505 host.go:66] Checking if "ha-286000-m02" exists ...
	I0816 10:22:08.802711    4505 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:08.802731    4505 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:08.811484    4505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51933
	I0816 10:22:08.811867    4505 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:08.812203    4505 main.go:141] libmachine: Using API Version  1
	I0816 10:22:08.812221    4505 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:08.812430    4505 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:08.812534    4505 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:22:08.812681    4505 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:22:08.812702    4505 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:22:08.812795    4505 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:22:08.812873    4505 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:22:08.812972    4505 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:22:08.813055    4505 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:22:08.842322    4505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:22:08.853295    4505 kubeconfig.go:125] found "ha-286000" server: "https://192.169.0.254:8443"
	I0816 10:22:08.853319    4505 api_server.go:166] Checking apiserver status ...
	I0816 10:22:08.853360    4505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:22:08.864089    4505 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1970/cgroup
	W0816 10:22:08.871179    4505 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1970/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:22:08.871235    4505 ssh_runner.go:195] Run: ls
	I0816 10:22:08.874507    4505 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0816 10:22:08.877464    4505 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0816 10:22:08.877475    4505 status.go:422] ha-286000-m02 apiserver status = Running (err=<nil>)
	I0816 10:22:08.877482    4505 status.go:257] ha-286000-m02 status: &{Name:ha-286000-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:22:08.877492    4505 status.go:255] checking status of ha-286000-m03 ...
	I0816 10:22:08.877746    4505 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:08.877767    4505 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:08.886627    4505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51937
	I0816 10:22:08.886961    4505 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:08.887297    4505 main.go:141] libmachine: Using API Version  1
	I0816 10:22:08.887310    4505 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:08.887528    4505 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:08.887626    4505 main.go:141] libmachine: (ha-286000-m03) Calling .GetState
	I0816 10:22:08.887700    4505 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:22:08.887780    4505 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:22:08.888832    4505 status.go:330] ha-286000-m03 host status = "Running" (err=<nil>)
	I0816 10:22:08.888842    4505 host.go:66] Checking if "ha-286000-m03" exists ...
	I0816 10:22:08.889107    4505 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:08.889132    4505 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:08.897819    4505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51939
	I0816 10:22:08.898169    4505 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:08.898509    4505 main.go:141] libmachine: Using API Version  1
	I0816 10:22:08.898520    4505 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:08.898747    4505 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:08.898865    4505 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:22:08.898946    4505 host.go:66] Checking if "ha-286000-m03" exists ...
	I0816 10:22:08.899206    4505 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:08.899230    4505 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:08.907925    4505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51941
	I0816 10:22:08.908252    4505 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:08.908578    4505 main.go:141] libmachine: Using API Version  1
	I0816 10:22:08.908595    4505 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:08.908823    4505 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:08.908930    4505 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:22:08.909062    4505 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:22:08.909074    4505 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:22:08.909177    4505 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:22:08.909259    4505 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:22:08.909343    4505 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:22:08.909426    4505 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:22:08.947504    4505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:22:08.957932    4505 kubeconfig.go:125] found "ha-286000" server: "https://192.169.0.254:8443"
	I0816 10:22:08.957945    4505 api_server.go:166] Checking apiserver status ...
	I0816 10:22:08.957987    4505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 10:22:08.967357    4505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:22:08.967368    4505 status.go:422] ha-286000-m03 apiserver status = Stopped (err=<nil>)
	I0816 10:22:08.967377    4505 status.go:257] ha-286000-m03 status: &{Name:ha-286000-m03 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:22:08.967393    4505 status.go:255] checking status of ha-286000-m04 ...
	I0816 10:22:08.967648    4505 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:08.967669    4505 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:08.976417    4505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51944
	I0816 10:22:08.976753    4505 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:08.977082    4505 main.go:141] libmachine: Using API Version  1
	I0816 10:22:08.977095    4505 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:08.977318    4505 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:08.977428    4505 main.go:141] libmachine: (ha-286000-m04) Calling .GetState
	I0816 10:22:08.977544    4505 main.go:141] libmachine: (ha-286000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:22:08.977594    4505 main.go:141] libmachine: (ha-286000-m04) DBG | hyperkit pid from json: 4248
	I0816 10:22:08.978626    4505 status.go:330] ha-286000-m04 host status = "Running" (err=<nil>)
	I0816 10:22:08.978635    4505 host.go:66] Checking if "ha-286000-m04" exists ...
	I0816 10:22:08.978877    4505 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:08.978905    4505 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:08.987589    4505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51946
	I0816 10:22:08.987933    4505 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:08.988255    4505 main.go:141] libmachine: Using API Version  1
	I0816 10:22:08.988268    4505 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:08.988489    4505 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:08.988600    4505 main.go:141] libmachine: (ha-286000-m04) Calling .GetIP
	I0816 10:22:08.988681    4505 host.go:66] Checking if "ha-286000-m04" exists ...
	I0816 10:22:08.988934    4505 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:08.988956    4505 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:08.997745    4505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51948
	I0816 10:22:08.998090    4505 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:08.998422    4505 main.go:141] libmachine: Using API Version  1
	I0816 10:22:08.998436    4505 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:08.998634    4505 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:08.998738    4505 main.go:141] libmachine: (ha-286000-m04) Calling .DriverName
	I0816 10:22:08.998859    4505 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:22:08.998870    4505 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHHostname
	I0816 10:22:08.998942    4505 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHPort
	I0816 10:22:08.999020    4505 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHKeyPath
	I0816 10:22:08.999129    4505 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHUsername
	I0816 10:22:08.999225    4505 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m04/id_rsa Username:docker}
	I0816 10:22:09.031629    4505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:22:09.042679    4505 status.go:257] ha-286000-m04 status: &{Name:ha-286000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
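The `&{Name:... Worker:true ...}` lines above are Go struct dumps emitted by status.go. A minimal sketch of such a record follows, assuming the field names exactly as printed in the log; on worker nodes the control-plane fields are filled with "Irrelevant", which is why the ha-286000-m04 stdout block lists only host and kubelet.

// Minimal sketch of the per-node status record, with field names
// taken from the "status.go:257" log lines above (assumption: the
// real type may carry more fields).
package main

import "fmt"

type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func newWorkerStatus(name, host, kubelet string) Status {
	return Status{
		Name: name, Host: host, Kubelet: kubelet,
		// Control-plane-only checks are not meaningful on a worker.
		APIServer: "Irrelevant", Kubeconfig: "Irrelevant",
		Worker: true,
	}
}

func main() {
	s := newWorkerStatus("ha-286000-m04", "Running", "Running")
	// Mirrors the "&{Name:ha-286000-m04 ... Worker:true ...}" dump.
	fmt.Printf("%+v\n", s)
}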
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-286000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-286000 status -v=7 --alsologtostderr: exit status 2 (435.911115ms)

                                                
                                                
-- stdout --
	ha-286000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-286000-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-286000-m03
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-286000-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 10:22:12.813566    4519 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:22:12.814003    4519 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:22:12.814011    4519 out.go:358] Setting ErrFile to fd 2...
	I0816 10:22:12.814015    4519 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:22:12.814187    4519 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1276/.minikube/bin
	I0816 10:22:12.814372    4519 out.go:352] Setting JSON to false
	I0816 10:22:12.814395    4519 mustload.go:65] Loading cluster: ha-286000
	I0816 10:22:12.814452    4519 notify.go:220] Checking for updates...
	I0816 10:22:12.814733    4519 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:22:12.814750    4519 status.go:255] checking status of ha-286000 ...
	I0816 10:22:12.815131    4519 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:12.815184    4519 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:12.824335    4519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51952
	I0816 10:22:12.824720    4519 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:12.825155    4519 main.go:141] libmachine: Using API Version  1
	I0816 10:22:12.825183    4519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:12.825409    4519 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:12.825513    4519 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:22:12.825596    4519 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:22:12.825673    4519 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:22:12.826702    4519 status.go:330] ha-286000 host status = "Running" (err=<nil>)
	I0816 10:22:12.826724    4519 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:22:12.826970    4519 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:12.826993    4519 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:12.835490    4519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51954
	I0816 10:22:12.835817    4519 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:12.836185    4519 main.go:141] libmachine: Using API Version  1
	I0816 10:22:12.836210    4519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:12.836429    4519 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:12.836538    4519 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:22:12.836625    4519 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:22:12.836879    4519 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:12.836905    4519 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:12.849656    4519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51956
	I0816 10:22:12.849999    4519 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:12.850331    4519 main.go:141] libmachine: Using API Version  1
	I0816 10:22:12.850344    4519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:12.850535    4519 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:12.850639    4519 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:22:12.850772    4519 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:22:12.850794    4519 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:22:12.850867    4519 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:22:12.850973    4519 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:22:12.851047    4519 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:22:12.851141    4519 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:22:12.884207    4519 ssh_runner.go:195] Run: systemctl --version
	I0816 10:22:12.888909    4519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:22:12.901597    4519 kubeconfig.go:125] found "ha-286000" server: "https://192.169.0.254:8443"
	I0816 10:22:12.901620    4519 api_server.go:166] Checking apiserver status ...
	I0816 10:22:12.901663    4519 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:22:12.913463    4519 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/7508/cgroup
	W0816 10:22:12.922027    4519 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/7508/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:22:12.922090    4519 ssh_runner.go:195] Run: ls
	I0816 10:22:12.926271    4519 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0816 10:22:12.930763    4519 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0816 10:22:12.930775    4519 status.go:422] ha-286000 apiserver status = Running (err=<nil>)
	I0816 10:22:12.930784    4519 status.go:257] ha-286000 status: &{Name:ha-286000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:22:12.930795    4519 status.go:255] checking status of ha-286000-m02 ...
	I0816 10:22:12.931076    4519 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:12.931097    4519 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:12.939785    4519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51960
	I0816 10:22:12.940132    4519 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:12.940495    4519 main.go:141] libmachine: Using API Version  1
	I0816 10:22:12.940509    4519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:12.940704    4519 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:12.940808    4519 main.go:141] libmachine: (ha-286000-m02) Calling .GetState
	I0816 10:22:12.940891    4519 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:22:12.940963    4519 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 4408
	I0816 10:22:12.941955    4519 status.go:330] ha-286000-m02 host status = "Running" (err=<nil>)
	I0816 10:22:12.941965    4519 host.go:66] Checking if "ha-286000-m02" exists ...
	I0816 10:22:12.942211    4519 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:12.942231    4519 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:12.950640    4519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51962
	I0816 10:22:12.950983    4519 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:12.951337    4519 main.go:141] libmachine: Using API Version  1
	I0816 10:22:12.951352    4519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:12.951553    4519 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:12.951666    4519 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:22:12.951767    4519 host.go:66] Checking if "ha-286000-m02" exists ...
	I0816 10:22:12.952048    4519 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:12.952082    4519 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:12.960618    4519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51964
	I0816 10:22:12.960956    4519 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:12.961307    4519 main.go:141] libmachine: Using API Version  1
	I0816 10:22:12.961318    4519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:12.961553    4519 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:12.961656    4519 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:22:12.961779    4519 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:22:12.961789    4519 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:22:12.961857    4519 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:22:12.961948    4519 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:22:12.962030    4519 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:22:12.962105    4519 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:22:12.991667    4519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:22:13.002904    4519 kubeconfig.go:125] found "ha-286000" server: "https://192.169.0.254:8443"
	I0816 10:22:13.002918    4519 api_server.go:166] Checking apiserver status ...
	I0816 10:22:13.002958    4519 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:22:13.013631    4519 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1970/cgroup
	W0816 10:22:13.020938    4519 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1970/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:22:13.020981    4519 ssh_runner.go:195] Run: ls
	I0816 10:22:13.024317    4519 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0816 10:22:13.027330    4519 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0816 10:22:13.027340    4519 status.go:422] ha-286000-m02 apiserver status = Running (err=<nil>)
	I0816 10:22:13.027348    4519 status.go:257] ha-286000-m02 status: &{Name:ha-286000-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:22:13.027358    4519 status.go:255] checking status of ha-286000-m03 ...
	I0816 10:22:13.027629    4519 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:13.027656    4519 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:13.036299    4519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51968
	I0816 10:22:13.036667    4519 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:13.037012    4519 main.go:141] libmachine: Using API Version  1
	I0816 10:22:13.037025    4519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:13.037284    4519 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:13.037393    4519 main.go:141] libmachine: (ha-286000-m03) Calling .GetState
	I0816 10:22:13.037478    4519 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:22:13.037557    4519 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:22:13.038584    4519 status.go:330] ha-286000-m03 host status = "Running" (err=<nil>)
	I0816 10:22:13.038594    4519 host.go:66] Checking if "ha-286000-m03" exists ...
	I0816 10:22:13.038854    4519 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:13.038884    4519 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:13.047616    4519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51970
	I0816 10:22:13.047948    4519 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:13.048301    4519 main.go:141] libmachine: Using API Version  1
	I0816 10:22:13.048318    4519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:13.048539    4519 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:13.048654    4519 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:22:13.048747    4519 host.go:66] Checking if "ha-286000-m03" exists ...
	I0816 10:22:13.049016    4519 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:13.049042    4519 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:13.057728    4519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51972
	I0816 10:22:13.058087    4519 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:13.058432    4519 main.go:141] libmachine: Using API Version  1
	I0816 10:22:13.058446    4519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:13.058670    4519 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:13.058784    4519 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:22:13.058913    4519 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:22:13.058924    4519 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:22:13.059005    4519 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:22:13.059082    4519 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:22:13.059166    4519 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:22:13.059249    4519 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:22:13.095748    4519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:22:13.106013    4519 kubeconfig.go:125] found "ha-286000" server: "https://192.169.0.254:8443"
	I0816 10:22:13.106032    4519 api_server.go:166] Checking apiserver status ...
	I0816 10:22:13.106071    4519 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 10:22:13.115412    4519 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:22:13.115423    4519 status.go:422] ha-286000-m03 apiserver status = Stopped (err=<nil>)
	I0816 10:22:13.115432    4519 status.go:257] ha-286000-m03 status: &{Name:ha-286000-m03 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:22:13.115442    4519 status.go:255] checking status of ha-286000-m04 ...
	I0816 10:22:13.115721    4519 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:13.115757    4519 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:13.124401    4519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51975
	I0816 10:22:13.124790    4519 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:13.125139    4519 main.go:141] libmachine: Using API Version  1
	I0816 10:22:13.125150    4519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:13.125375    4519 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:13.125486    4519 main.go:141] libmachine: (ha-286000-m04) Calling .GetState
	I0816 10:22:13.125569    4519 main.go:141] libmachine: (ha-286000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:22:13.125642    4519 main.go:141] libmachine: (ha-286000-m04) DBG | hyperkit pid from json: 4248
	I0816 10:22:13.126646    4519 status.go:330] ha-286000-m04 host status = "Running" (err=<nil>)
	I0816 10:22:13.126655    4519 host.go:66] Checking if "ha-286000-m04" exists ...
	I0816 10:22:13.126904    4519 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:13.126927    4519 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:13.135567    4519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51977
	I0816 10:22:13.135912    4519 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:13.136280    4519 main.go:141] libmachine: Using API Version  1
	I0816 10:22:13.136298    4519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:13.136535    4519 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:13.136658    4519 main.go:141] libmachine: (ha-286000-m04) Calling .GetIP
	I0816 10:22:13.136754    4519 host.go:66] Checking if "ha-286000-m04" exists ...
	I0816 10:22:13.136994    4519 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:13.137017    4519 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:13.145577    4519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51979
	I0816 10:22:13.145923    4519 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:13.146292    4519 main.go:141] libmachine: Using API Version  1
	I0816 10:22:13.146308    4519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:13.146511    4519 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:13.146616    4519 main.go:141] libmachine: (ha-286000-m04) Calling .DriverName
	I0816 10:22:13.146740    4519 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:22:13.146751    4519 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHHostname
	I0816 10:22:13.146835    4519 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHPort
	I0816 10:22:13.146934    4519 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHKeyPath
	I0816 10:22:13.147024    4519 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHUsername
	I0816 10:22:13.147103    4519 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m04/id_rsa Username:docker}
	I0816 10:22:13.179440    4519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:22:13.190751    4519 status.go:257] ha-286000-m04 status: &{Name:ha-286000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
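Each node check in these transcripts runs two shell commands over SSH: `sudo systemctl is-active --quiet service kubelet`, whose exit status doubles as the kubelet state, and a df/awk one-liner that reports /var usage. The sketch below issues the same commands through the stock ssh CLI rather than minikube's ssh_runner; the address and key path are copied from the ha-286000-m04 log lines and everything else is an assumption for illustration.

// Illustrative stand-in for ssh_runner.go: run the two node checks
// over the ssh CLI. Not minikube code.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func runOverSSH(addr, keyPath, command string) (string, error) {
	out, err := exec.Command("ssh",
		"-i", keyPath,
		"-o", "StrictHostKeyChecking=no",
		"docker@"+addr, command).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	addr := "192.169.0.8" // ha-286000-m04, from the log above
	key := "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m04/id_rsa"

	// Kubelet check: exit status 0 means active, non-zero means stopped.
	if _, err := runOverSSH(addr, key, "sudo systemctl is-active --quiet service kubelet"); err != nil {
		fmt.Println("kubelet: Stopped")
	} else {
		fmt.Println("kubelet: Running")
	}

	// Disk-usage check: percentage used on /var, e.g. "17%".
	usage, _ := runOverSSH(addr, key, `df -h /var | awk 'NR==2{print $5}'`)
	fmt.Println("/var used:", usage)
}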
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-286000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-286000 status -v=7 --alsologtostderr: exit status 2 (437.276907ms)

                                                
                                                
-- stdout --
	ha-286000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-286000-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-286000-m03
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-286000-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 10:22:17.211282    4533 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:22:17.211469    4533 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:22:17.211475    4533 out.go:358] Setting ErrFile to fd 2...
	I0816 10:22:17.211479    4533 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:22:17.211671    4533 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1276/.minikube/bin
	I0816 10:22:17.211851    4533 out.go:352] Setting JSON to false
	I0816 10:22:17.211873    4533 mustload.go:65] Loading cluster: ha-286000
	I0816 10:22:17.211915    4533 notify.go:220] Checking for updates...
	I0816 10:22:17.212164    4533 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:22:17.212182    4533 status.go:255] checking status of ha-286000 ...
	I0816 10:22:17.212566    4533 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:17.212613    4533 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:17.221760    4533 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51983
	I0816 10:22:17.222162    4533 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:17.222621    4533 main.go:141] libmachine: Using API Version  1
	I0816 10:22:17.222638    4533 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:17.222914    4533 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:17.223058    4533 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:22:17.223162    4533 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:22:17.223222    4533 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:22:17.224244    4533 status.go:330] ha-286000 host status = "Running" (err=<nil>)
	I0816 10:22:17.224262    4533 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:22:17.224513    4533 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:17.224576    4533 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:17.233321    4533 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51985
	I0816 10:22:17.233667    4533 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:17.233999    4533 main.go:141] libmachine: Using API Version  1
	I0816 10:22:17.234009    4533 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:17.234248    4533 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:17.234358    4533 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:22:17.234443    4533 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:22:17.234701    4533 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:17.234728    4533 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:17.246257    4533 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51987
	I0816 10:22:17.246596    4533 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:17.246928    4533 main.go:141] libmachine: Using API Version  1
	I0816 10:22:17.246940    4533 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:17.247159    4533 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:17.247287    4533 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:22:17.247431    4533 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:22:17.247452    4533 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:22:17.247533    4533 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:22:17.247599    4533 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:22:17.247684    4533 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:22:17.247772    4533 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:22:17.280755    4533 ssh_runner.go:195] Run: systemctl --version
	I0816 10:22:17.285088    4533 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:22:17.302961    4533 kubeconfig.go:125] found "ha-286000" server: "https://192.169.0.254:8443"
	I0816 10:22:17.302991    4533 api_server.go:166] Checking apiserver status ...
	I0816 10:22:17.303031    4533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:22:17.314119    4533 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/7508/cgroup
	W0816 10:22:17.321405    4533 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/7508/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:22:17.321463    4533 ssh_runner.go:195] Run: ls
	I0816 10:22:17.324686    4533 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0816 10:22:17.327692    4533 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0816 10:22:17.327703    4533 status.go:422] ha-286000 apiserver status = Running (err=<nil>)
	I0816 10:22:17.327713    4533 status.go:257] ha-286000 status: &{Name:ha-286000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:22:17.327723    4533 status.go:255] checking status of ha-286000-m02 ...
	I0816 10:22:17.327955    4533 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:17.327975    4533 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:17.336851    4533 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51991
	I0816 10:22:17.337196    4533 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:17.337546    4533 main.go:141] libmachine: Using API Version  1
	I0816 10:22:17.337569    4533 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:17.337767    4533 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:17.337863    4533 main.go:141] libmachine: (ha-286000-m02) Calling .GetState
	I0816 10:22:17.337937    4533 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:22:17.338004    4533 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 4408
	I0816 10:22:17.339025    4533 status.go:330] ha-286000-m02 host status = "Running" (err=<nil>)
	I0816 10:22:17.339034    4533 host.go:66] Checking if "ha-286000-m02" exists ...
	I0816 10:22:17.339289    4533 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:17.339311    4533 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:17.348166    4533 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51993
	I0816 10:22:17.348543    4533 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:17.348886    4533 main.go:141] libmachine: Using API Version  1
	I0816 10:22:17.348895    4533 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:17.349141    4533 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:17.349258    4533 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:22:17.349350    4533 host.go:66] Checking if "ha-286000-m02" exists ...
	I0816 10:22:17.349636    4533 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:17.349661    4533 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:17.358750    4533 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51995
	I0816 10:22:17.359110    4533 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:17.359430    4533 main.go:141] libmachine: Using API Version  1
	I0816 10:22:17.359438    4533 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:17.359661    4533 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:17.359775    4533 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:22:17.359916    4533 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:22:17.359928    4533 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:22:17.360013    4533 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:22:17.360088    4533 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:22:17.360196    4533 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:22:17.360294    4533 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:22:17.389670    4533 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:22:17.401135    4533 kubeconfig.go:125] found "ha-286000" server: "https://192.169.0.254:8443"
	I0816 10:22:17.401150    4533 api_server.go:166] Checking apiserver status ...
	I0816 10:22:17.401201    4533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:22:17.412487    4533 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1970/cgroup
	W0816 10:22:17.420282    4533 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1970/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:22:17.420331    4533 ssh_runner.go:195] Run: ls
	I0816 10:22:17.423476    4533 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0816 10:22:17.426682    4533 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0816 10:22:17.426694    4533 status.go:422] ha-286000-m02 apiserver status = Running (err=<nil>)
	I0816 10:22:17.426703    4533 status.go:257] ha-286000-m02 status: &{Name:ha-286000-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:22:17.426716    4533 status.go:255] checking status of ha-286000-m03 ...
	I0816 10:22:17.426981    4533 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:17.427001    4533 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:17.435954    4533 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51999
	I0816 10:22:17.436309    4533 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:17.436647    4533 main.go:141] libmachine: Using API Version  1
	I0816 10:22:17.436669    4533 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:17.436905    4533 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:17.437021    4533 main.go:141] libmachine: (ha-286000-m03) Calling .GetState
	I0816 10:22:17.437105    4533 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:22:17.437189    4533 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:22:17.438201    4533 status.go:330] ha-286000-m03 host status = "Running" (err=<nil>)
	I0816 10:22:17.438211    4533 host.go:66] Checking if "ha-286000-m03" exists ...
	I0816 10:22:17.438457    4533 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:17.438486    4533 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:17.447025    4533 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52001
	I0816 10:22:17.447351    4533 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:17.447667    4533 main.go:141] libmachine: Using API Version  1
	I0816 10:22:17.447675    4533 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:17.447891    4533 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:17.447999    4533 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:22:17.448091    4533 host.go:66] Checking if "ha-286000-m03" exists ...
	I0816 10:22:17.448336    4533 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:17.448377    4533 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:17.457033    4533 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52003
	I0816 10:22:17.457373    4533 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:17.457730    4533 main.go:141] libmachine: Using API Version  1
	I0816 10:22:17.457743    4533 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:17.457941    4533 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:17.458050    4533 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:22:17.458174    4533 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:22:17.458200    4533 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:22:17.458281    4533 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:22:17.458357    4533 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:22:17.458430    4533 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:22:17.458500    4533 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:22:17.495748    4533 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:22:17.506454    4533 kubeconfig.go:125] found "ha-286000" server: "https://192.169.0.254:8443"
	I0816 10:22:17.506468    4533 api_server.go:166] Checking apiserver status ...
	I0816 10:22:17.506509    4533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 10:22:17.515882    4533 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:22:17.515893    4533 status.go:422] ha-286000-m03 apiserver status = Stopped (err=<nil>)
	I0816 10:22:17.515902    4533 status.go:257] ha-286000-m03 status: &{Name:ha-286000-m03 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:22:17.515912    4533 status.go:255] checking status of ha-286000-m04 ...
	I0816 10:22:17.516165    4533 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:17.516187    4533 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:17.525031    4533 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52006
	I0816 10:22:17.525410    4533 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:17.525757    4533 main.go:141] libmachine: Using API Version  1
	I0816 10:22:17.525771    4533 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:17.525973    4533 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:17.526098    4533 main.go:141] libmachine: (ha-286000-m04) Calling .GetState
	I0816 10:22:17.526177    4533 main.go:141] libmachine: (ha-286000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:22:17.526244    4533 main.go:141] libmachine: (ha-286000-m04) DBG | hyperkit pid from json: 4248
	I0816 10:22:17.527237    4533 status.go:330] ha-286000-m04 host status = "Running" (err=<nil>)
	I0816 10:22:17.527246    4533 host.go:66] Checking if "ha-286000-m04" exists ...
	I0816 10:22:17.527497    4533 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:17.527524    4533 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:17.536034    4533 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52008
	I0816 10:22:17.536352    4533 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:17.536666    4533 main.go:141] libmachine: Using API Version  1
	I0816 10:22:17.536685    4533 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:17.536880    4533 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:17.536988    4533 main.go:141] libmachine: (ha-286000-m04) Calling .GetIP
	I0816 10:22:17.537079    4533 host.go:66] Checking if "ha-286000-m04" exists ...
	I0816 10:22:17.537342    4533 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:17.537376    4533 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:17.545915    4533 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52010
	I0816 10:22:17.546259    4533 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:17.546571    4533 main.go:141] libmachine: Using API Version  1
	I0816 10:22:17.546586    4533 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:17.546794    4533 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:17.546897    4533 main.go:141] libmachine: (ha-286000-m04) Calling .DriverName
	I0816 10:22:17.547014    4533 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:22:17.547024    4533 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHHostname
	I0816 10:22:17.547101    4533 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHPort
	I0816 10:22:17.547183    4533 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHKeyPath
	I0816 10:22:17.547259    4533 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHUsername
	I0816 10:22:17.547324    4533 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m04/id_rsa Username:docker}
	I0816 10:22:17.579196    4533 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:22:17.590259    4533 status.go:257] ha-286000-m04 status: &{Name:ha-286000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
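
Note on the trace above: each `minikube status` pass runs the same per-node sequence that the stderr log records. For every node it launches the hyperkit driver plugin on a local loopback port, opens an SSH session to check disk usage (`df -h /var`) and kubelet (`sudo systemctl is-active --quiet service kubelet`), and, for control-plane nodes, locates the apiserver with `pgrep` and probes the shared endpoint `https://192.169.0.254:8443/healthz`. The Go sketch below illustrates the shape of that final health probe only; it is not minikube's implementation, and the relaxed TLS configuration is an assumption made to keep the example self-contained.

	// healthz_probe.go: illustrative sketch of the probe logged as
	// "Checking apiserver healthz at https://192.169.0.254:8443/healthz".
	// Not minikube's actual code; InsecureSkipVerify is an assumption
	// standing in for loading the cluster's CA certificate.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"strings"
		"time"
	)

	// apiserverHealthy reports whether GET <endpoint>/healthz returns 200 "ok",
	// the condition the log records as `apiserver status = Running`.
	func apiserverHealthy(endpoint string) (bool, error) {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(endpoint + "/healthz")
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			return false, err
		}
		return resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok", nil
	}

	func main() {
		healthy, err := apiserverHealthy("https://192.169.0.254:8443")
		fmt.Printf("healthy=%v err=%v\n", healthy, err)
	}
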
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-286000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-286000 status -v=7 --alsologtostderr: exit status 2 (431.743684ms)

-- stdout --
	ha-286000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-286000-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-286000-m03
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-286000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0816 10:22:23.411508    4548 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:22:23.411786    4548 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:22:23.411791    4548 out.go:358] Setting ErrFile to fd 2...
	I0816 10:22:23.411795    4548 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:22:23.411968    4548 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1276/.minikube/bin
	I0816 10:22:23.412151    4548 out.go:352] Setting JSON to false
	I0816 10:22:23.412173    4548 mustload.go:65] Loading cluster: ha-286000
	I0816 10:22:23.412216    4548 notify.go:220] Checking for updates...
	I0816 10:22:23.412486    4548 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:22:23.412506    4548 status.go:255] checking status of ha-286000 ...
	I0816 10:22:23.412915    4548 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:23.412958    4548 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:23.422057    4548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52014
	I0816 10:22:23.422435    4548 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:23.422838    4548 main.go:141] libmachine: Using API Version  1
	I0816 10:22:23.422847    4548 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:23.423079    4548 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:23.423183    4548 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:22:23.423268    4548 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:22:23.423344    4548 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:22:23.424333    4548 status.go:330] ha-286000 host status = "Running" (err=<nil>)
	I0816 10:22:23.424353    4548 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:22:23.424591    4548 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:23.424613    4548 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:23.433095    4548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52016
	I0816 10:22:23.433425    4548 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:23.433810    4548 main.go:141] libmachine: Using API Version  1
	I0816 10:22:23.433838    4548 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:23.434044    4548 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:23.434142    4548 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:22:23.434228    4548 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:22:23.434511    4548 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:23.434542    4548 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:23.446173    4548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52018
	I0816 10:22:23.446522    4548 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:23.446845    4548 main.go:141] libmachine: Using API Version  1
	I0816 10:22:23.446854    4548 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:23.447051    4548 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:23.447140    4548 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:22:23.447261    4548 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:22:23.447282    4548 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:22:23.447370    4548 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:22:23.447455    4548 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:22:23.447541    4548 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:22:23.447622    4548 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:22:23.479846    4548 ssh_runner.go:195] Run: systemctl --version
	I0816 10:22:23.484131    4548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:22:23.495351    4548 kubeconfig.go:125] found "ha-286000" server: "https://192.169.0.254:8443"
	I0816 10:22:23.495375    4548 api_server.go:166] Checking apiserver status ...
	I0816 10:22:23.495410    4548 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:22:23.508081    4548 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/7508/cgroup
	W0816 10:22:23.516468    4548 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/7508/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:22:23.516529    4548 ssh_runner.go:195] Run: ls
	I0816 10:22:23.519733    4548 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0816 10:22:23.522778    4548 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0816 10:22:23.522789    4548 status.go:422] ha-286000 apiserver status = Running (err=<nil>)
	I0816 10:22:23.522805    4548 status.go:257] ha-286000 status: &{Name:ha-286000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:22:23.522817    4548 status.go:255] checking status of ha-286000-m02 ...
	I0816 10:22:23.523058    4548 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:23.523079    4548 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:23.531888    4548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52022
	I0816 10:22:23.532226    4548 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:23.532516    4548 main.go:141] libmachine: Using API Version  1
	I0816 10:22:23.532525    4548 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:23.532723    4548 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:23.532831    4548 main.go:141] libmachine: (ha-286000-m02) Calling .GetState
	I0816 10:22:23.532909    4548 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:22:23.532979    4548 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 4408
	I0816 10:22:23.533959    4548 status.go:330] ha-286000-m02 host status = "Running" (err=<nil>)
	I0816 10:22:23.533968    4548 host.go:66] Checking if "ha-286000-m02" exists ...
	I0816 10:22:23.534212    4548 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:23.534243    4548 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:23.542857    4548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52024
	I0816 10:22:23.543216    4548 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:23.543526    4548 main.go:141] libmachine: Using API Version  1
	I0816 10:22:23.543547    4548 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:23.543749    4548 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:23.543857    4548 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:22:23.543935    4548 host.go:66] Checking if "ha-286000-m02" exists ...
	I0816 10:22:23.544180    4548 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:23.544203    4548 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:23.552625    4548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52026
	I0816 10:22:23.552965    4548 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:23.553316    4548 main.go:141] libmachine: Using API Version  1
	I0816 10:22:23.553330    4548 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:23.553541    4548 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:23.553669    4548 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:22:23.553811    4548 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:22:23.553822    4548 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:22:23.553906    4548 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:22:23.553993    4548 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:22:23.554112    4548 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:22:23.554210    4548 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:22:23.582637    4548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:22:23.593746    4548 kubeconfig.go:125] found "ha-286000" server: "https://192.169.0.254:8443"
	I0816 10:22:23.593759    4548 api_server.go:166] Checking apiserver status ...
	I0816 10:22:23.593794    4548 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:22:23.605994    4548 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1970/cgroup
	W0816 10:22:23.613346    4548 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1970/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:22:23.613395    4548 ssh_runner.go:195] Run: ls
	I0816 10:22:23.617306    4548 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0816 10:22:23.620568    4548 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0816 10:22:23.620580    4548 status.go:422] ha-286000-m02 apiserver status = Running (err=<nil>)
	I0816 10:22:23.620588    4548 status.go:257] ha-286000-m02 status: &{Name:ha-286000-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:22:23.620603    4548 status.go:255] checking status of ha-286000-m03 ...
	I0816 10:22:23.620870    4548 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:23.620890    4548 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:23.629539    4548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52030
	I0816 10:22:23.629885    4548 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:23.630238    4548 main.go:141] libmachine: Using API Version  1
	I0816 10:22:23.630252    4548 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:23.630463    4548 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:23.630578    4548 main.go:141] libmachine: (ha-286000-m03) Calling .GetState
	I0816 10:22:23.630665    4548 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:22:23.630744    4548 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:22:23.631759    4548 status.go:330] ha-286000-m03 host status = "Running" (err=<nil>)
	I0816 10:22:23.631769    4548 host.go:66] Checking if "ha-286000-m03" exists ...
	I0816 10:22:23.632016    4548 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:23.632043    4548 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:23.640621    4548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52032
	I0816 10:22:23.640976    4548 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:23.641319    4548 main.go:141] libmachine: Using API Version  1
	I0816 10:22:23.641332    4548 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:23.641537    4548 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:23.641654    4548 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:22:23.641735    4548 host.go:66] Checking if "ha-286000-m03" exists ...
	I0816 10:22:23.641969    4548 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:23.641990    4548 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:23.650497    4548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52034
	I0816 10:22:23.650845    4548 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:23.651202    4548 main.go:141] libmachine: Using API Version  1
	I0816 10:22:23.651220    4548 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:23.651405    4548 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:23.651540    4548 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:22:23.651684    4548 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:22:23.651698    4548 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:22:23.651780    4548 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:22:23.651850    4548 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:22:23.651933    4548 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:22:23.652009    4548 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:22:23.689951    4548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:22:23.700290    4548 kubeconfig.go:125] found "ha-286000" server: "https://192.169.0.254:8443"
	I0816 10:22:23.700303    4548 api_server.go:166] Checking apiserver status ...
	I0816 10:22:23.700341    4548 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 10:22:23.709782    4548 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:22:23.709795    4548 status.go:422] ha-286000-m03 apiserver status = Stopped (err=<nil>)
	I0816 10:22:23.709805    4548 status.go:257] ha-286000-m03 status: &{Name:ha-286000-m03 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:22:23.709815    4548 status.go:255] checking status of ha-286000-m04 ...
	I0816 10:22:23.710094    4548 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:23.710119    4548 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:23.719005    4548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52037
	I0816 10:22:23.719360    4548 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:23.719707    4548 main.go:141] libmachine: Using API Version  1
	I0816 10:22:23.719721    4548 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:23.719931    4548 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:23.720041    4548 main.go:141] libmachine: (ha-286000-m04) Calling .GetState
	I0816 10:22:23.720119    4548 main.go:141] libmachine: (ha-286000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:22:23.720196    4548 main.go:141] libmachine: (ha-286000-m04) DBG | hyperkit pid from json: 4248
	I0816 10:22:23.721201    4548 status.go:330] ha-286000-m04 host status = "Running" (err=<nil>)
	I0816 10:22:23.721211    4548 host.go:66] Checking if "ha-286000-m04" exists ...
	I0816 10:22:23.721499    4548 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:23.721523    4548 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:23.730434    4548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52039
	I0816 10:22:23.730782    4548 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:23.731126    4548 main.go:141] libmachine: Using API Version  1
	I0816 10:22:23.731141    4548 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:23.731340    4548 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:23.731446    4548 main.go:141] libmachine: (ha-286000-m04) Calling .GetIP
	I0816 10:22:23.731525    4548 host.go:66] Checking if "ha-286000-m04" exists ...
	I0816 10:22:23.731801    4548 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:23.731828    4548 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:23.740657    4548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52041
	I0816 10:22:23.741023    4548 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:23.741353    4548 main.go:141] libmachine: Using API Version  1
	I0816 10:22:23.741372    4548 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:23.741568    4548 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:23.741663    4548 main.go:141] libmachine: (ha-286000-m04) Calling .DriverName
	I0816 10:22:23.741781    4548 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:22:23.741792    4548 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHHostname
	I0816 10:22:23.741872    4548 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHPort
	I0816 10:22:23.741947    4548 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHKeyPath
	I0816 10:22:23.742036    4548 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHUsername
	I0816 10:22:23.742118    4548 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m04/id_rsa Username:docker}
	I0816 10:22:23.774453    4548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:22:23.785617    4548 status.go:257] ha-286000-m04 status: &{Name:ha-286000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
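
Each of these status retries exits with code 2 rather than a generic failure code. `minikube status` documents a bit-encoded exit status (1 = minikube host not OK, 2 = cluster not OK, 4 = Kubernetes not OK, so 7 would mean all three), and the observed status 2 is consistent with the stopped kubelet and apiserver on ha-286000-m03 while all hosts and the other control planes stay healthy. The decoder below illustrates that documented bit scheme; it is not minikube code.

	// decode_exit.go: illustrative decoder for minikube status's bit-encoded
	// exit code, per the command's help text (1 = host, 2 = cluster,
	// 4 = Kubernetes). The decoder itself is hypothetical.
	package main

	import "fmt"

	func decodeStatusExit(code int) []string {
		var problems []string
		if code&1 != 0 {
			problems = append(problems, "minikube (host) not OK")
		}
		if code&2 != 0 {
			problems = append(problems, "cluster not OK")
		}
		if code&4 != 0 {
			problems = append(problems, "Kubernetes not OK")
		}
		if len(problems) == 0 {
			problems = append(problems, "everything OK")
		}
		return problems
	}

	func main() {
		// The retries above exited with status 2: the cluster bit,
		// consistent with ha-286000-m03's stopped kubelet.
		fmt.Println(decodeStatusExit(2))
	}
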
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-286000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-286000 status -v=7 --alsologtostderr: exit status 2 (428.793184ms)

-- stdout --
	ha-286000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-286000-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-286000-m03
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-286000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0816 10:22:40.190986    4572 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:22:40.191283    4572 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:22:40.191289    4572 out.go:358] Setting ErrFile to fd 2...
	I0816 10:22:40.191293    4572 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:22:40.191478    4572 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1276/.minikube/bin
	I0816 10:22:40.191668    4572 out.go:352] Setting JSON to false
	I0816 10:22:40.191690    4572 mustload.go:65] Loading cluster: ha-286000
	I0816 10:22:40.191731    4572 notify.go:220] Checking for updates...
	I0816 10:22:40.192013    4572 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:22:40.192029    4572 status.go:255] checking status of ha-286000 ...
	I0816 10:22:40.192416    4572 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:40.192463    4572 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:40.201442    4572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52045
	I0816 10:22:40.201796    4572 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:40.202218    4572 main.go:141] libmachine: Using API Version  1
	I0816 10:22:40.202227    4572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:40.202410    4572 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:40.202521    4572 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:22:40.202596    4572 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:22:40.202679    4572 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:22:40.203632    4572 status.go:330] ha-286000 host status = "Running" (err=<nil>)
	I0816 10:22:40.203654    4572 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:22:40.203896    4572 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:40.203917    4572 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:40.212304    4572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52047
	I0816 10:22:40.212606    4572 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:40.212992    4572 main.go:141] libmachine: Using API Version  1
	I0816 10:22:40.213016    4572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:40.213217    4572 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:40.213318    4572 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:22:40.213395    4572 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:22:40.213637    4572 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:40.213664    4572 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:40.224770    4572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52049
	I0816 10:22:40.225129    4572 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:40.225448    4572 main.go:141] libmachine: Using API Version  1
	I0816 10:22:40.225458    4572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:40.225659    4572 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:40.225755    4572 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:22:40.225879    4572 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:22:40.225901    4572 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:22:40.225980    4572 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:22:40.226059    4572 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:22:40.226141    4572 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:22:40.226222    4572 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:22:40.259351    4572 ssh_runner.go:195] Run: systemctl --version
	I0816 10:22:40.263908    4572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:22:40.274717    4572 kubeconfig.go:125] found "ha-286000" server: "https://192.169.0.254:8443"
	I0816 10:22:40.274746    4572 api_server.go:166] Checking apiserver status ...
	I0816 10:22:40.274790    4572 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:22:40.285780    4572 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/7508/cgroup
	W0816 10:22:40.293614    4572 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/7508/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:22:40.293661    4572 ssh_runner.go:195] Run: ls
	I0816 10:22:40.296837    4572 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0816 10:22:40.299954    4572 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0816 10:22:40.299965    4572 status.go:422] ha-286000 apiserver status = Running (err=<nil>)
	I0816 10:22:40.299974    4572 status.go:257] ha-286000 status: &{Name:ha-286000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:22:40.299989    4572 status.go:255] checking status of ha-286000-m02 ...
	I0816 10:22:40.300250    4572 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:40.300272    4572 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:40.308981    4572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52053
	I0816 10:22:40.309340    4572 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:40.309659    4572 main.go:141] libmachine: Using API Version  1
	I0816 10:22:40.309670    4572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:40.309894    4572 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:40.310002    4572 main.go:141] libmachine: (ha-286000-m02) Calling .GetState
	I0816 10:22:40.310087    4572 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:22:40.310172    4572 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 4408
	I0816 10:22:40.311087    4572 status.go:330] ha-286000-m02 host status = "Running" (err=<nil>)
	I0816 10:22:40.311094    4572 host.go:66] Checking if "ha-286000-m02" exists ...
	I0816 10:22:40.311339    4572 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:40.311360    4572 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:40.319982    4572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52055
	I0816 10:22:40.320346    4572 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:40.320692    4572 main.go:141] libmachine: Using API Version  1
	I0816 10:22:40.320710    4572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:40.320924    4572 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:40.321028    4572 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:22:40.321122    4572 host.go:66] Checking if "ha-286000-m02" exists ...
	I0816 10:22:40.321384    4572 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:40.321407    4572 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:40.329896    4572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52057
	I0816 10:22:40.330251    4572 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:40.330567    4572 main.go:141] libmachine: Using API Version  1
	I0816 10:22:40.330575    4572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:40.330795    4572 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:40.330902    4572 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:22:40.331026    4572 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:22:40.331037    4572 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:22:40.331117    4572 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:22:40.331214    4572 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:22:40.331294    4572 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:22:40.331363    4572 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:22:40.361121    4572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:22:40.372310    4572 kubeconfig.go:125] found "ha-286000" server: "https://192.169.0.254:8443"
	I0816 10:22:40.372325    4572 api_server.go:166] Checking apiserver status ...
	I0816 10:22:40.372363    4572 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:22:40.383824    4572 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1970/cgroup
	W0816 10:22:40.391095    4572 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1970/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:22:40.391146    4572 ssh_runner.go:195] Run: ls
	I0816 10:22:40.396007    4572 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0816 10:22:40.399376    4572 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0816 10:22:40.399387    4572 status.go:422] ha-286000-m02 apiserver status = Running (err=<nil>)
	I0816 10:22:40.399395    4572 status.go:257] ha-286000-m02 status: &{Name:ha-286000-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:22:40.399406    4572 status.go:255] checking status of ha-286000-m03 ...
	I0816 10:22:40.399671    4572 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:40.399709    4572 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:40.408350    4572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52061
	I0816 10:22:40.408717    4572 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:40.409074    4572 main.go:141] libmachine: Using API Version  1
	I0816 10:22:40.409088    4572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:40.409314    4572 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:40.409414    4572 main.go:141] libmachine: (ha-286000-m03) Calling .GetState
	I0816 10:22:40.409501    4572 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:22:40.409575    4572 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:22:40.410522    4572 status.go:330] ha-286000-m03 host status = "Running" (err=<nil>)
	I0816 10:22:40.410531    4572 host.go:66] Checking if "ha-286000-m03" exists ...
	I0816 10:22:40.410798    4572 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:40.410831    4572 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:40.419393    4572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52063
	I0816 10:22:40.419721    4572 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:40.420033    4572 main.go:141] libmachine: Using API Version  1
	I0816 10:22:40.420044    4572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:40.420264    4572 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:40.420391    4572 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:22:40.420484    4572 host.go:66] Checking if "ha-286000-m03" exists ...
	I0816 10:22:40.420756    4572 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:40.420779    4572 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:40.429294    4572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52065
	I0816 10:22:40.429630    4572 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:40.429983    4572 main.go:141] libmachine: Using API Version  1
	I0816 10:22:40.429996    4572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:40.430214    4572 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:40.430327    4572 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:22:40.430472    4572 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:22:40.430485    4572 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:22:40.430560    4572 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:22:40.430635    4572 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:22:40.430733    4572 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:22:40.430816    4572 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:22:40.466863    4572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:22:40.477248    4572 kubeconfig.go:125] found "ha-286000" server: "https://192.169.0.254:8443"
	I0816 10:22:40.477261    4572 api_server.go:166] Checking apiserver status ...
	I0816 10:22:40.477297    4572 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 10:22:40.486653    4572 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:22:40.486665    4572 status.go:422] ha-286000-m03 apiserver status = Stopped (err=<nil>)
	I0816 10:22:40.486676    4572 status.go:257] ha-286000-m03 status: &{Name:ha-286000-m03 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:22:40.486686    4572 status.go:255] checking status of ha-286000-m04 ...
	I0816 10:22:40.486952    4572 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:40.486976    4572 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:40.495713    4572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52068
	I0816 10:22:40.496042    4572 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:40.496389    4572 main.go:141] libmachine: Using API Version  1
	I0816 10:22:40.496408    4572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:40.496635    4572 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:40.496730    4572 main.go:141] libmachine: (ha-286000-m04) Calling .GetState
	I0816 10:22:40.496815    4572 main.go:141] libmachine: (ha-286000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:22:40.496895    4572 main.go:141] libmachine: (ha-286000-m04) DBG | hyperkit pid from json: 4248
	I0816 10:22:40.497864    4572 status.go:330] ha-286000-m04 host status = "Running" (err=<nil>)
	I0816 10:22:40.497873    4572 host.go:66] Checking if "ha-286000-m04" exists ...
	I0816 10:22:40.498124    4572 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:40.498146    4572 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:40.506835    4572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52070
	I0816 10:22:40.507193    4572 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:40.507553    4572 main.go:141] libmachine: Using API Version  1
	I0816 10:22:40.507570    4572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:40.507799    4572 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:40.507916    4572 main.go:141] libmachine: (ha-286000-m04) Calling .GetIP
	I0816 10:22:40.508005    4572 host.go:66] Checking if "ha-286000-m04" exists ...
	I0816 10:22:40.508250    4572 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:40.508283    4572 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:40.516769    4572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52072
	I0816 10:22:40.517121    4572 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:40.517480    4572 main.go:141] libmachine: Using API Version  1
	I0816 10:22:40.517495    4572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:40.517697    4572 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:40.517786    4572 main.go:141] libmachine: (ha-286000-m04) Calling .DriverName
	I0816 10:22:40.517908    4572 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:22:40.517920    4572 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHHostname
	I0816 10:22:40.518003    4572 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHPort
	I0816 10:22:40.518077    4572 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHKeyPath
	I0816 10:22:40.518191    4572 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHUsername
	I0816 10:22:40.518274    4572 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m04/id_rsa Username:docker}
	I0816 10:22:40.550703    4572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:22:40.561925    4572 status.go:257] ha-286000-m04 status: &{Name:ha-286000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
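
One warning recurs in every pass: `unable to find freezer cgroup`. The status check first tries to read a `freezer` controller line from /proc/<pid>/cgroup for the apiserver process; that line only exists under cgroup v1, so on this guest the egrep exits 1 and the check falls back to the /healthz probe, which is why the warning never affects the reported status. The sketch below mirrors that two-step pattern; the helper names are hypothetical.

	// freezer_check.go: illustrative sketch (hypothetical helper names) of the
	// two-step check seen in the log: look for a freezer line in
	// /proc/<pid>/cgroup, and fall back to the HTTP health probe when the
	// controller is absent (as on cgroup v2 guests).
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// freezerCgroup returns the freezer line from /proc/<pid>/cgroup, or an
	// error when no freezer controller is mounted.
	func freezerCgroup(pid int) (string, error) {
		data, err := os.ReadFile(fmt.Sprintf("/proc/%d/cgroup", pid))
		if err != nil {
			return "", err
		}
		for _, line := range strings.Split(string(data), "\n") {
			// cgroup v1 entries look like "7:freezer:/kubepods/...".
			if strings.Contains(line, ":freezer:") {
				return line, nil
			}
		}
		return "", fmt.Errorf("no freezer cgroup for pid %d", pid)
	}

	func main() {
		if line, err := freezerCgroup(os.Getpid()); err != nil {
			// Mirrors the logged fallback: warn, then rely on /healthz instead.
			fmt.Println("unable to find freezer cgroup:", err, "- falling back to healthz probe")
		} else {
			fmt.Println("freezer cgroup:", line)
		}
	}
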
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-286000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-286000 status -v=7 --alsologtostderr: exit status 2 (437.133221ms)

-- stdout --
	ha-286000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-286000-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-286000-m03
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-286000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0816 10:22:54.651486    4597 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:22:54.651683    4597 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:22:54.651688    4597 out.go:358] Setting ErrFile to fd 2...
	I0816 10:22:54.651692    4597 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:22:54.651879    4597 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1276/.minikube/bin
	I0816 10:22:54.652058    4597 out.go:352] Setting JSON to false
	I0816 10:22:54.652080    4597 mustload.go:65] Loading cluster: ha-286000
	I0816 10:22:54.652117    4597 notify.go:220] Checking for updates...
	I0816 10:22:54.652390    4597 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:22:54.652407    4597 status.go:255] checking status of ha-286000 ...
	I0816 10:22:54.652781    4597 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:54.652827    4597 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:54.662184    4597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52076
	I0816 10:22:54.662586    4597 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:54.663019    4597 main.go:141] libmachine: Using API Version  1
	I0816 10:22:54.663028    4597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:54.663284    4597 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:54.663403    4597 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:22:54.663500    4597 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:22:54.663573    4597 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:22:54.664529    4597 status.go:330] ha-286000 host status = "Running" (err=<nil>)
	I0816 10:22:54.664550    4597 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:22:54.664788    4597 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:54.664823    4597 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:54.673219    4597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52078
	I0816 10:22:54.673561    4597 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:54.673918    4597 main.go:141] libmachine: Using API Version  1
	I0816 10:22:54.673931    4597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:54.674144    4597 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:54.674247    4597 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:22:54.674333    4597 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:22:54.674572    4597 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:54.674594    4597 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:54.687237    4597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52080
	I0816 10:22:54.687617    4597 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:54.687966    4597 main.go:141] libmachine: Using API Version  1
	I0816 10:22:54.687980    4597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:54.688187    4597 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:54.688295    4597 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:22:54.688434    4597 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:22:54.688464    4597 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:22:54.688571    4597 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:22:54.688655    4597 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:22:54.688737    4597 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:22:54.688813    4597 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:22:54.723863    4597 ssh_runner.go:195] Run: systemctl --version
	I0816 10:22:54.728326    4597 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:22:54.741977    4597 kubeconfig.go:125] found "ha-286000" server: "https://192.169.0.254:8443"
	I0816 10:22:54.742000    4597 api_server.go:166] Checking apiserver status ...
	I0816 10:22:54.742040    4597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:22:54.754020    4597 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/7508/cgroup
	W0816 10:22:54.761790    4597 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/7508/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:22:54.761833    4597 ssh_runner.go:195] Run: ls
	I0816 10:22:54.764902    4597 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0816 10:22:54.768032    4597 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0816 10:22:54.768043    4597 status.go:422] ha-286000 apiserver status = Running (err=<nil>)
	I0816 10:22:54.768060    4597 status.go:257] ha-286000 status: &{Name:ha-286000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:22:54.768072    4597 status.go:255] checking status of ha-286000-m02 ...
	I0816 10:22:54.768343    4597 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:54.768362    4597 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:54.777127    4597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52084
	I0816 10:22:54.777461    4597 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:54.777836    4597 main.go:141] libmachine: Using API Version  1
	I0816 10:22:54.777853    4597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:54.778072    4597 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:54.778185    4597 main.go:141] libmachine: (ha-286000-m02) Calling .GetState
	I0816 10:22:54.778270    4597 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:22:54.778345    4597 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 4408
	I0816 10:22:54.779325    4597 status.go:330] ha-286000-m02 host status = "Running" (err=<nil>)
	I0816 10:22:54.779334    4597 host.go:66] Checking if "ha-286000-m02" exists ...
	I0816 10:22:54.779584    4597 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:54.779606    4597 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:54.788301    4597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52086
	I0816 10:22:54.788679    4597 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:54.789015    4597 main.go:141] libmachine: Using API Version  1
	I0816 10:22:54.789029    4597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:54.789232    4597 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:54.789342    4597 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:22:54.789441    4597 host.go:66] Checking if "ha-286000-m02" exists ...
	I0816 10:22:54.789700    4597 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:54.789732    4597 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:54.798467    4597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52088
	I0816 10:22:54.798835    4597 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:54.799165    4597 main.go:141] libmachine: Using API Version  1
	I0816 10:22:54.799174    4597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:54.799394    4597 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:54.799508    4597 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:22:54.799642    4597 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:22:54.799652    4597 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:22:54.799726    4597 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:22:54.799815    4597 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:22:54.799912    4597 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:22:54.800015    4597 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:22:54.829334    4597 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:22:54.840962    4597 kubeconfig.go:125] found "ha-286000" server: "https://192.169.0.254:8443"
	I0816 10:22:54.840975    4597 api_server.go:166] Checking apiserver status ...
	I0816 10:22:54.841014    4597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:22:54.851774    4597 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1970/cgroup
	W0816 10:22:54.859437    4597 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1970/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:22:54.859479    4597 ssh_runner.go:195] Run: ls
	I0816 10:22:54.862603    4597 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0816 10:22:54.865629    4597 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0816 10:22:54.865641    4597 status.go:422] ha-286000-m02 apiserver status = Running (err=<nil>)
	I0816 10:22:54.865652    4597 status.go:257] ha-286000-m02 status: &{Name:ha-286000-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:22:54.865661    4597 status.go:255] checking status of ha-286000-m03 ...
	I0816 10:22:54.865916    4597 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:54.865943    4597 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:54.874711    4597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52092
	I0816 10:22:54.875082    4597 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:54.875446    4597 main.go:141] libmachine: Using API Version  1
	I0816 10:22:54.875460    4597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:54.875681    4597 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:54.875787    4597 main.go:141] libmachine: (ha-286000-m03) Calling .GetState
	I0816 10:22:54.875864    4597 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:22:54.875949    4597 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:22:54.876933    4597 status.go:330] ha-286000-m03 host status = "Running" (err=<nil>)
	I0816 10:22:54.876943    4597 host.go:66] Checking if "ha-286000-m03" exists ...
	I0816 10:22:54.877193    4597 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:54.877221    4597 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:54.885845    4597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52094
	I0816 10:22:54.886187    4597 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:54.886547    4597 main.go:141] libmachine: Using API Version  1
	I0816 10:22:54.886564    4597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:54.886823    4597 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:54.886946    4597 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:22:54.887032    4597 host.go:66] Checking if "ha-286000-m03" exists ...
	I0816 10:22:54.887303    4597 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:54.887328    4597 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:54.896124    4597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52096
	I0816 10:22:54.896503    4597 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:54.896895    4597 main.go:141] libmachine: Using API Version  1
	I0816 10:22:54.896911    4597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:54.897117    4597 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:54.897225    4597 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:22:54.897360    4597 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:22:54.897373    4597 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:22:54.897451    4597 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:22:54.897531    4597 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:22:54.897622    4597 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:22:54.897709    4597 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:22:54.933864    4597 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:22:54.944129    4597 kubeconfig.go:125] found "ha-286000" server: "https://192.169.0.254:8443"
	I0816 10:22:54.944156    4597 api_server.go:166] Checking apiserver status ...
	I0816 10:22:54.944195    4597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 10:22:54.953720    4597 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:22:54.953730    4597 status.go:422] ha-286000-m03 apiserver status = Stopped (err=<nil>)
	I0816 10:22:54.953740    4597 status.go:257] ha-286000-m03 status: &{Name:ha-286000-m03 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:22:54.953750    4597 status.go:255] checking status of ha-286000-m04 ...
	I0816 10:22:54.954003    4597 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:54.954025    4597 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:54.962877    4597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52099
	I0816 10:22:54.963235    4597 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:54.963606    4597 main.go:141] libmachine: Using API Version  1
	I0816 10:22:54.963621    4597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:54.963840    4597 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:54.963967    4597 main.go:141] libmachine: (ha-286000-m04) Calling .GetState
	I0816 10:22:54.964076    4597 main.go:141] libmachine: (ha-286000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:22:54.964145    4597 main.go:141] libmachine: (ha-286000-m04) DBG | hyperkit pid from json: 4248
	I0816 10:22:54.965163    4597 status.go:330] ha-286000-m04 host status = "Running" (err=<nil>)
	I0816 10:22:54.965176    4597 host.go:66] Checking if "ha-286000-m04" exists ...
	I0816 10:22:54.965452    4597 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:54.965477    4597 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:54.974471    4597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52101
	I0816 10:22:54.974809    4597 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:54.975172    4597 main.go:141] libmachine: Using API Version  1
	I0816 10:22:54.975193    4597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:54.975389    4597 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:54.975485    4597 main.go:141] libmachine: (ha-286000-m04) Calling .GetIP
	I0816 10:22:54.975571    4597 host.go:66] Checking if "ha-286000-m04" exists ...
	I0816 10:22:54.975817    4597 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:22:54.975841    4597 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:22:54.984346    4597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52103
	I0816 10:22:54.984689    4597 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:22:54.985049    4597 main.go:141] libmachine: Using API Version  1
	I0816 10:22:54.985063    4597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:22:54.985326    4597 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:22:54.985470    4597 main.go:141] libmachine: (ha-286000-m04) Calling .DriverName
	I0816 10:22:54.985621    4597 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:22:54.985633    4597 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHHostname
	I0816 10:22:54.985723    4597 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHPort
	I0816 10:22:54.985795    4597 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHKeyPath
	I0816 10:22:54.985944    4597 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHUsername
	I0816 10:22:54.986035    4597 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m04/id_rsa Username:docker}
	I0816 10:22:55.018602    4597 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:22:55.029922    4597 status.go:257] ha-286000-m04 status: &{Name:ha-286000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-amd64 -p ha-286000 status -v=7 --alsologtostderr" : exit status 2
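Context for the verdict above: the status dump shows ha-286000-m03 with Host:Running but APIServer:Stopped, and `minikube status` reports any degraded node through a non-zero exit code, which the harness treats as test failure. Below is a minimal, self-contained Go sketch of that exit-code check; the binary path, profile name, and flags are taken from the log above, while everything else (file name, the reading of exit status 2 as "degraded") is illustrative and not the harness's actual code.

	// status_probe.go: run "minikube status" for a profile and map the
	// process exit code to a verdict, mirroring how the failure above is
	// derived. Hypothetical sketch, not minikube's implementation.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-amd64",
			"-p", "ha-286000", "status", "-v=7", "--alsologtostderr")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))

		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("all nodes healthy")
		case errors.As(err, &exitErr):
			// Any non-zero exit (status 2 in the run above) means at
			// least one node or component is not in its expected state.
			fmt.Printf("degraded: exit status %d\n", exitErr.ExitCode())
		default:
			fmt.Println("failed to invoke minikube:", err)
		}
	}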
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-286000 -n ha-286000
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-286000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-286000 logs -n 25: (2.315686419s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:15 PDT | 16 Aug 24 10:15 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:15 PDT | 16 Aug 24 10:15 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:15 PDT | 16 Aug 24 10:15 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:15 PDT | 16 Aug 24 10:15 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT |                     |
	|         | busybox-7dff88458-99xmp --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT |                     |
	|         | busybox-7dff88458-99xmp --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT |                     |
	|         | busybox-7dff88458-99xmp -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92 -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT |                     |
	|         | busybox-7dff88458-99xmp              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92 -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| node    | add -p ha-286000 -v=7                | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:17 PDT | 16 Aug 24 10:17 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-286000 node stop m02 -v=7         | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:17 PDT | 16 Aug 24 10:18 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-286000 node start m02 -v=7        | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:20 PDT | 16 Aug 24 10:22 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 10:01:48
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 10:01:48.113296    3758 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:01:48.113462    3758 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:01:48.113467    3758 out.go:358] Setting ErrFile to fd 2...
	I0816 10:01:48.113470    3758 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:01:48.113648    3758 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1276/.minikube/bin
	I0816 10:01:48.115192    3758 out.go:352] Setting JSON to false
	I0816 10:01:48.140204    3758 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1878,"bootTime":1723825830,"procs":433,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0816 10:01:48.140316    3758 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:01:48.194498    3758 out.go:177] * [ha-286000] minikube v1.33.1 on Darwin 14.6.1
	I0816 10:01:48.236650    3758 notify.go:220] Checking for updates...
	I0816 10:01:48.262360    3758 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:01:48.320415    3758 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:01:48.381368    3758 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0816 10:01:48.452505    3758 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:01:48.474398    3758 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:01:48.495459    3758 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:01:48.520120    3758 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:01:48.550379    3758 out.go:177] * Using the hyperkit driver based on user configuration
	I0816 10:01:48.592331    3758 start.go:297] selected driver: hyperkit
	I0816 10:01:48.592358    3758 start.go:901] validating driver "hyperkit" against <nil>
	I0816 10:01:48.592382    3758 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:01:48.596757    3758 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:01:48.596907    3758 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19461-1276/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0816 10:01:48.605545    3758 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0816 10:01:48.609748    3758 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:01:48.609772    3758 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0816 10:01:48.609805    3758 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 10:01:48.610046    3758 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:01:48.610094    3758 cni.go:84] Creating CNI manager for ""
	I0816 10:01:48.610103    3758 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0816 10:01:48.610108    3758 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0816 10:01:48.610236    3758 start.go:340] cluster config:
	{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:01:48.610323    3758 iso.go:125] acquiring lock: {Name:mkb430b43b5e7a8a683454482519837ed996c78d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:01:48.632334    3758 out.go:177] * Starting "ha-286000" primary control-plane node in "ha-286000" cluster
	I0816 10:01:48.653456    3758 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:01:48.653537    3758 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0816 10:01:48.653563    3758 cache.go:56] Caching tarball of preloaded images
	I0816 10:01:48.653777    3758 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 10:01:48.653813    3758 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:01:48.654332    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:01:48.654370    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json: {Name:mk79fdae2e45cdfc987caaf16c5f211150e90dc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:01:48.654995    3758 start.go:360] acquireMachinesLock for ha-286000: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:01:48.655106    3758 start.go:364] duration metric: took 87.458µs to acquireMachinesLock for "ha-286000"
	I0816 10:01:48.655154    3758 start.go:93] Provisioning new machine with config: &{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:01:48.655246    3758 start.go:125] createHost starting for "" (driver="hyperkit")
	I0816 10:01:48.677450    3758 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 10:01:48.677701    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:01:48.677786    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:01:48.687593    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51004
	I0816 10:01:48.687935    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:01:48.688347    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:01:48.688357    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:01:48.688562    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:01:48.688668    3758 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:01:48.688765    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:01:48.688895    3758 start.go:159] libmachine.API.Create for "ha-286000" (driver="hyperkit")
	I0816 10:01:48.688916    3758 client.go:168] LocalClient.Create starting
	I0816 10:01:48.688948    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem
	I0816 10:01:48.688999    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:01:48.689012    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:01:48.689063    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem
	I0816 10:01:48.689100    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:01:48.689111    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:01:48.689128    3758 main.go:141] libmachine: Running pre-create checks...
	I0816 10:01:48.689135    3758 main.go:141] libmachine: (ha-286000) Calling .PreCreateCheck
	I0816 10:01:48.689224    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:48.689427    3758 main.go:141] libmachine: (ha-286000) Calling .GetConfigRaw
	I0816 10:01:48.698771    3758 main.go:141] libmachine: Creating machine...
	I0816 10:01:48.698794    3758 main.go:141] libmachine: (ha-286000) Calling .Create
	I0816 10:01:48.699018    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:48.699296    3758 main.go:141] libmachine: (ha-286000) DBG | I0816 10:01:48.699015    3766 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:01:48.699450    3758 main.go:141] libmachine: (ha-286000) Downloading /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1276/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 10:01:48.884950    3758 main.go:141] libmachine: (ha-286000) DBG | I0816 10:01:48.884833    3766 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa...
	I0816 10:01:48.986348    3758 main.go:141] libmachine: (ha-286000) DBG | I0816 10:01:48.986275    3766 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/ha-286000.rawdisk...
	I0816 10:01:48.986388    3758 main.go:141] libmachine: (ha-286000) DBG | Writing magic tar header
	I0816 10:01:48.986396    3758 main.go:141] libmachine: (ha-286000) DBG | Writing SSH key tar header
	I0816 10:01:48.987038    3758 main.go:141] libmachine: (ha-286000) DBG | I0816 10:01:48.987004    3766 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000 ...
	I0816 10:01:49.360700    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:49.360726    3758 main.go:141] libmachine: (ha-286000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/hyperkit.pid
	I0816 10:01:49.360741    3758 main.go:141] libmachine: (ha-286000) DBG | Using UUID ad96de67-e238-408c-89eb-d74e5b68d297
	I0816 10:01:49.471852    3758 main.go:141] libmachine: (ha-286000) DBG | Generated MAC 66:c8:48:4e:12:1b
	I0816 10:01:49.471873    3758 main.go:141] libmachine: (ha-286000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000
	I0816 10:01:49.471908    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ad96de67-e238-408c-89eb-d74e5b68d297", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000b2330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:01:49.471951    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ad96de67-e238-408c-89eb-d74e5b68d297", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000b2330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:01:49.471996    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "ad96de67-e238-408c-89eb-d74e5b68d297", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/ha-286000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"}
	I0816 10:01:49.472043    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U ad96de67-e238-408c-89eb-d74e5b68d297 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/ha-286000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"
	I0816 10:01:49.472063    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 10:01:49.475179    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 DEBUG: hyperkit: Pid is 3771
	I0816 10:01:49.475583    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 0
	I0816 10:01:49.475597    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:49.475674    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:49.476510    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:49.476583    3758 main.go:141] libmachine: (ha-286000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0816 10:01:49.476592    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:01:49.476602    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:01:49.476612    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:01:49.482826    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 10:01:49.534430    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 10:01:49.535040    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:01:49.535056    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:01:49.535064    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:01:49.535072    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:01:49.909510    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 10:01:49.909527    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:49 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 10:01:50.024439    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:01:50.024459    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:01:50.024479    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:01:50.024494    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:01:50.025363    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 10:01:50.025375    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:50 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 10:01:51.477573    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 1
	I0816 10:01:51.477587    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:51.477660    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:51.478481    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:51.478504    3758 main.go:141] libmachine: (ha-286000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0816 10:01:51.478511    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:01:51.478520    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:01:51.478531    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:01:53.479582    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 2
	I0816 10:01:53.479599    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:53.479760    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:53.480630    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:53.480678    3758 main.go:141] libmachine: (ha-286000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0816 10:01:53.480688    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:01:53.480697    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:01:53.480710    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:01:55.482614    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 3
	I0816 10:01:55.482630    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:55.482703    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:55.483616    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:55.483662    3758 main.go:141] libmachine: (ha-286000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0816 10:01:55.483672    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:01:55.483679    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:01:55.483687    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:01:55.581983    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:55 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 10:01:55.582063    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:55 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 10:01:55.582075    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:55 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 10:01:55.606414    3758 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:01:55 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 10:01:57.484306    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 4
	I0816 10:01:57.484322    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:57.484414    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:57.485196    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:57.485261    3758 main.go:141] libmachine: (ha-286000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0816 10:01:57.485273    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:01:57.485286    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:01:57.485294    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:01:59.485978    3758 main.go:141] libmachine: (ha-286000) DBG | Attempt 5
	I0816 10:01:59.486008    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:59.486192    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:59.487610    3758 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:01:59.487709    3758 main.go:141] libmachine: (ha-286000) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:01:59.487730    3758 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:01:59.487784    3758 main.go:141] libmachine: (ha-286000) DBG | Found match: 66:c8:48:4e:12:1b
	I0816 10:01:59.487797    3758 main.go:141] libmachine: (ha-286000) DBG | IP: 192.169.0.5
	I0816 10:01:59.487831    3758 main.go:141] libmachine: (ha-286000) Calling .GetConfigRaw
	I0816 10:01:59.488860    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:01:59.489069    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:01:59.489276    3758 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 10:01:59.489289    3758 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:01:59.489463    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:01:59.489567    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:01:59.490615    3758 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 10:01:59.490628    3758 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 10:01:59.490644    3758 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 10:01:59.490652    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:01:59.490782    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:01:59.490923    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:01:59.491055    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:01:59.491190    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:01:59.491362    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:01:59.491595    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:01:59.491605    3758 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 10:01:59.511220    3758 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0816 10:02:02.573095    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 10:02:02.573108    3758 main.go:141] libmachine: Detecting the provisioner...
	I0816 10:02:02.573113    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:02.573255    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:02.573385    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.573489    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.573602    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:02.573753    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:02.573906    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:02.573914    3758 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 10:02:02.632510    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 10:02:02.632561    3758 main.go:141] libmachine: found compatible host: buildroot
	I0816 10:02:02.632568    3758 main.go:141] libmachine: Provisioning with buildroot...
	I0816 10:02:02.632573    3758 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:02:02.632721    3758 buildroot.go:166] provisioning hostname "ha-286000"
	I0816 10:02:02.632732    3758 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:02:02.632834    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:02.632956    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:02.633046    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.633122    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.633236    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:02.633363    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:02.633492    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:02.633500    3758 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-286000 && echo "ha-286000" | sudo tee /etc/hostname
	I0816 10:02:02.702336    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-286000
	
	I0816 10:02:02.702354    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:02.702482    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:02.702569    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.702670    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.702756    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:02.702893    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:02.703040    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:02.703052    3758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-286000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-286000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-286000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 10:02:02.766997    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
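
The shell snippet above is an idempotent hosts-file edit: skip if any /etc/hosts line already ends with the hostname, rewrite an existing 127.0.1.1 entry if one exists, otherwise append a new one. A minimal Go sketch of the same logic follows; ensureHostsEntry and the hard-coded hostname are illustrative, not minikube's actual code.

package main

import (
	"os"
	"regexp"
	"strings"
)

// ensureHostsEntry mirrors the logged shell: no-op if some line already ends
// in the hostname, rewrite an existing 127.0.1.1 line, else append one.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	hasName := regexp.MustCompile(`\s` + regexp.QuoteMeta(hostname) + `$`)
	loopback := regexp.MustCompile(`^127\.0\.1\.1\s`)
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	for _, l := range lines {
		if hasName.MatchString(l) {
			return nil // entry already present; nothing to do
		}
	}
	replaced := false
	for i, l := range lines {
		if loopback.MatchString(l) {
			lines[i] = "127.0.1.1 " + hostname
			replaced = true
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "ha-286000"); err != nil {
		panic(err)
	}
}
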
	I0816 10:02:02.767016    3758 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19461-1276/.minikube CaCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19461-1276/.minikube}
	I0816 10:02:02.767026    3758 buildroot.go:174] setting up certificates
	I0816 10:02:02.767038    3758 provision.go:84] configureAuth start
	I0816 10:02:02.767044    3758 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:02:02.767175    3758 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:02:02.767270    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:02.767372    3758 provision.go:143] copyHostCerts
	I0816 10:02:02.767406    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:02:02.767476    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem, removing ...
	I0816 10:02:02.767485    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:02:02.767630    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem (1679 bytes)
	I0816 10:02:02.767837    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:02:02.767868    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem, removing ...
	I0816 10:02:02.767873    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:02:02.767991    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem (1078 bytes)
	I0816 10:02:02.768146    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:02:02.768179    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem, removing ...
	I0816 10:02:02.768184    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:02:02.768268    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem (1123 bytes)
	I0816 10:02:02.768410    3758 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem org=jenkins.ha-286000 san=[127.0.0.1 192.169.0.5 ha-286000 localhost minikube]
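
The provision step above issues a server certificate signed by the machine CA, with a SAN list mixing IPs and hostnames. Below is a minimal crypto/x509 sketch of that kind of issuance, assuming PEM files on disk and an RSA (PKCS#1) CA key; the file names, validity window, and key size are illustrative, not minikube's actual values.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Load the CA pair (assumed PEM, RSA/PKCS#1 key).
	caPEM, err := os.ReadFile("ca.pem")
	check(err)
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	check(err)
	caBlock, _ := pem.Decode(caPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	check(err)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	check(err)

	// New server key, plus a template carrying the SANs from the log line:
	// [127.0.0.1 192.169.0.5 ha-286000 localhost minikube].
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-286000"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.5")},
		DNSNames:     []string{"ha-286000", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
}
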
	I0816 10:02:02.915225    3758 provision.go:177] copyRemoteCerts
	I0816 10:02:02.915279    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 10:02:02.915299    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:02.915444    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:02.915558    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:02.915640    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:02.915723    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:02.953069    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 10:02:02.953145    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 10:02:02.973516    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 10:02:02.973600    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0816 10:02:02.993038    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 10:02:02.993100    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 10:02:03.013565    3758 provision.go:87] duration metric: took 246.514857ms to configureAuth
	I0816 10:02:03.013581    3758 buildroot.go:189] setting minikube options for container-runtime
	I0816 10:02:03.013724    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:02:03.013737    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:03.013889    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:03.013978    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:03.014067    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.014147    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.014250    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:03.014369    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:03.014498    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:03.014506    3758 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 10:02:03.073645    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 10:02:03.073660    3758 buildroot.go:70] root file system type: tmpfs
	I0816 10:02:03.073734    3758 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 10:02:03.073745    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:03.073872    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:03.073966    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.074063    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.074164    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:03.074292    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:03.074429    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:03.074479    3758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 10:02:03.148891    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 10:02:03.148912    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:03.149052    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:03.149138    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.149235    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:03.149324    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:03.149466    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:03.149604    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:03.149616    3758 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 10:02:04.780498    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
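
The `diff -u ... || { mv ...; systemctl ...; }` command above is an install-if-changed pattern: the candidate unit is written to docker.service.new and only promoted (followed by daemon-reload, enable, and restart) when it differs from the live unit; here the diff fails because no unit exists yet, so the new file is installed outright. A Go sketch of the same pattern, with hypothetical paths and plain systemctl calls rather than minikube's provisioner code:

package main

import (
	"bytes"
	"os"
	"os/exec"
)

// installIfChanged promotes a candidate unit file only when its contents
// differ from the live one, then reloads systemd and restarts the service.
func installIfChanged(path string, candidate []byte) error {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, candidate) {
		return nil // unchanged: skip the disruptive restart
	}
	if err := os.WriteFile(path+".new", candidate, 0644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if err := exec.Command(args[0], args[1:]...).Run(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	unit, err := os.ReadFile("docker.service.candidate") // placeholder input
	if err != nil {
		panic(err)
	}
	if err := installIfChanged("/lib/systemd/system/docker.service", unit); err != nil {
		panic(err)
	}
}
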
	
	I0816 10:02:04.780519    3758 main.go:141] libmachine: Checking connection to Docker...
	I0816 10:02:04.780526    3758 main.go:141] libmachine: (ha-286000) Calling .GetURL
	I0816 10:02:04.780691    3758 main.go:141] libmachine: Docker is up and running!
	I0816 10:02:04.780698    3758 main.go:141] libmachine: Reticulating splines...
	I0816 10:02:04.780702    3758 client.go:171] duration metric: took 16.091876944s to LocalClient.Create
	I0816 10:02:04.780713    3758 start.go:167] duration metric: took 16.091916939s to libmachine.API.Create "ha-286000"
	I0816 10:02:04.780722    3758 start.go:293] postStartSetup for "ha-286000" (driver="hyperkit")
	I0816 10:02:04.780729    3758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 10:02:04.780738    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:04.780873    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 10:02:04.780884    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:04.780956    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:04.781047    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:04.781154    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:04.781275    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:04.819650    3758 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 10:02:04.823314    3758 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 10:02:04.823335    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/addons for local assets ...
	I0816 10:02:04.823443    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/files for local assets ...
	I0816 10:02:04.823624    3758 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> 18312.pem in /etc/ssl/certs
	I0816 10:02:04.823631    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /etc/ssl/certs/18312.pem
	I0816 10:02:04.823837    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 10:02:04.831860    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:02:04.850901    3758 start.go:296] duration metric: took 70.172878ms for postStartSetup
	I0816 10:02:04.850935    3758 main.go:141] libmachine: (ha-286000) Calling .GetConfigRaw
	I0816 10:02:04.851561    3758 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:02:04.851702    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:02:04.852053    3758 start.go:128] duration metric: took 16.196888252s to createHost
	I0816 10:02:04.852071    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:04.852167    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:04.852261    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:04.852344    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:04.852420    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:04.852525    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:04.852648    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:02:04.852655    3758 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 10:02:04.911299    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723827724.986438448
	
	I0816 10:02:04.911317    3758 fix.go:216] guest clock: 1723827724.986438448
	I0816 10:02:04.911322    3758 fix.go:229] Guest: 2024-08-16 10:02:04.986438448 -0700 PDT Remote: 2024-08-16 10:02:04.852061 -0700 PDT m=+16.776125584 (delta=134.377448ms)
	I0816 10:02:04.911341    3758 fix.go:200] guest clock delta is within tolerance: 134.377448ms
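
The guest-clock check above runs `date +%s.%N` in the VM, parses the seconds.nanoseconds output, and compares it against the host clock; the ~134ms delta is accepted. A small Go sketch of that parse-and-compare, assuming GNU date's zero-padded 9-digit nanosecond field and an illustrative 2s tolerance (not minikube's actual setting):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns `date +%s.%N` output such as "1723827724.986438448"
// into a time.Time (assumes a 9-digit nanosecond field, as GNU date emits).
func parseGuestClock(out string) (time.Time, error) {
	sec, frac, ok := strings.Cut(strings.TrimSpace(out), ".")
	s, err := strconv.ParseInt(sec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var ns int64
	if ok {
		if ns, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(s, ns), nil
}

func main() {
	guest, err := parseGuestClock("1723827724.986438448")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest).Abs()
	// 2s is an assumed tolerance for illustration only.
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta < 2*time.Second)
}
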
	I0816 10:02:04.911345    3758 start.go:83] releasing machines lock for "ha-286000", held for 16.256325696s
	I0816 10:02:04.911364    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:04.911498    3758 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:02:04.911592    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:04.911942    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:04.912068    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:04.912144    3758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 10:02:04.912171    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:04.912218    3758 ssh_runner.go:195] Run: cat /version.json
	I0816 10:02:04.912231    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:04.912262    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:04.912305    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:04.912360    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:04.912384    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:04.912437    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:04.912464    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:04.912547    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:04.912567    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:04.945688    3758 ssh_runner.go:195] Run: systemctl --version
	I0816 10:02:04.997158    3758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 10:02:05.002278    3758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 10:02:05.002315    3758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 10:02:05.015089    3758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
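
The find/-exec step above sidelines competing CNI configs by renaming anything matching *bridge* or *podman* in /etc/cni/net.d to <name>.mk_disabled, so the container runtime ignores them. A Go sketch of the same rename pass (the directory and suffix are as logged; the code itself is illustrative):

package main

import (
	"log"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue // leave directories and already-disabled configs alone
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				log.Fatal(err)
			}
			log.Printf("disabled %s", src)
		}
	}
}
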
	I0816 10:02:05.015098    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:02:05.015195    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:02:05.030338    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 10:02:05.038588    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 10:02:05.046898    3758 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 10:02:05.046932    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 10:02:05.055211    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:02:05.063533    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 10:02:05.073244    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:02:05.081895    3758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 10:02:05.090915    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 10:02:05.099773    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 10:02:05.108719    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 10:02:05.117772    3758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 10:02:05.125574    3758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 10:02:05.145403    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:05.261568    3758 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0816 10:02:05.278920    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:02:05.278996    3758 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 10:02:05.293925    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:02:05.306704    3758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 10:02:05.320000    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:02:05.330230    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:02:05.340255    3758 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 10:02:05.369098    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:02:05.379374    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:02:05.395120    3758 ssh_runner.go:195] Run: which cri-dockerd
	I0816 10:02:05.397921    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 10:02:05.404866    3758 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 10:02:05.417935    3758 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 10:02:05.514793    3758 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 10:02:05.626208    3758 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 10:02:05.626292    3758 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 10:02:05.640317    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:05.737978    3758 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:02:08.105430    3758 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.367458496s)
	I0816 10:02:08.105491    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0816 10:02:08.115970    3758 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0816 10:02:08.129325    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:02:08.140312    3758 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0816 10:02:08.237628    3758 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0816 10:02:08.340285    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:08.436168    3758 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0816 10:02:08.457808    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:02:08.468314    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:08.561830    3758 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0816 10:02:08.620080    3758 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0816 10:02:08.620164    3758 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0816 10:02:08.624716    3758 start.go:563] Will wait 60s for crictl version
	I0816 10:02:08.624764    3758 ssh_runner.go:195] Run: which crictl
	I0816 10:02:08.627806    3758 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 10:02:08.660437    3758 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0816 10:02:08.660505    3758 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:02:08.678073    3758 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:02:08.722165    3758 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0816 10:02:08.722217    3758 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:02:08.722605    3758 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0816 10:02:08.727140    3758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:02:08.736769    3758 kubeadm.go:883] updating cluster {Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 10:02:08.736830    3758 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:02:08.736887    3758 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 10:02:08.748223    3758 docker.go:685] Got preloaded images: 
	I0816 10:02:08.748236    3758 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.0 wasn't preloaded
	I0816 10:02:08.748294    3758 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0816 10:02:08.755601    3758 ssh_runner.go:195] Run: which lz4
	I0816 10:02:08.758510    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0816 10:02:08.758630    3758 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 10:02:08.761594    3758 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 10:02:08.761610    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342554258 bytes)
	I0816 10:02:09.771496    3758 docker.go:649] duration metric: took 1.012922149s to copy over tarball
	I0816 10:02:09.771559    3758 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 10:02:12.050040    3758 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.27849665s)
	I0816 10:02:12.050054    3758 ssh_runner.go:146] rm: /preloaded.tar.lz4
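
The `Completed: ... (2.27849665s)` entries come from a run-and-time wrapper around remote commands. A Go sketch of that pattern, timing a command and logging the duration once it crosses a threshold; the 1s cutoff and local exec.Command are stand-ins for minikube's SSH runner, not its actual implementation:

package main

import (
	"log"
	"os/exec"
	"time"
)

// runTimed runs a command and, like the ssh_runner entries above, reports
// the elapsed time when it exceeds an (assumed) 1s threshold.
func runTimed(name string, args ...string) error {
	start := time.Now()
	err := exec.Command(name, args...).Run()
	if d := time.Since(start); d > time.Second {
		log.Printf("Completed: %s %v: (%s)", name, args, d)
	}
	return err
}

func main() {
	// The log above unpacks the preload tarball on the guest this way.
	if err := runTimed("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4"); err != nil {
		log.Fatal(err)
	}
}
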
	I0816 10:02:12.075438    3758 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0816 10:02:12.083331    3758 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0816 10:02:12.097261    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:12.204348    3758 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:02:14.516272    3758 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.311936705s)
	I0816 10:02:14.516363    3758 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 10:02:14.529500    3758 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0816 10:02:14.529519    3758 cache_images.go:84] Images are preloaded, skipping loading
	I0816 10:02:14.529530    3758 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.0 docker true true} ...
	I0816 10:02:14.529607    3758 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-286000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 10:02:14.529674    3758 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0816 10:02:14.568093    3758 cni.go:84] Creating CNI manager for ""
	I0816 10:02:14.568107    3758 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0816 10:02:14.568120    3758 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 10:02:14.568134    3758 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-286000 NodeName:ha-286000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 10:02:14.568222    3758 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-286000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 10:02:14.568237    3758 kube-vip.go:115] generating kube-vip config ...
	I0816 10:02:14.568286    3758 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 10:02:14.580706    3758 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 10:02:14.580778    3758 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0816 10:02:14.580832    3758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 10:02:14.588124    3758 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 10:02:14.588174    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0816 10:02:14.595283    3758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0816 10:02:14.608721    3758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 10:02:14.622036    3758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0816 10:02:14.635525    3758 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1446 bytes)
	I0816 10:02:14.648868    3758 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0816 10:02:14.651748    3758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:02:14.661305    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:14.752561    3758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:02:14.768967    3758 certs.go:68] Setting up /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000 for IP: 192.169.0.5
	I0816 10:02:14.768980    3758 certs.go:194] generating shared ca certs ...
	I0816 10:02:14.768991    3758 certs.go:226] acquiring lock for ca certs: {Name:mka549fbc467849834d2267da204028c49d88a3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:14.769183    3758 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key
	I0816 10:02:14.769258    3758 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key
	I0816 10:02:14.769269    3758 certs.go:256] generating profile certs ...
	I0816 10:02:14.769315    3758 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key
	I0816 10:02:14.769328    3758 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt with IP's: []
	I0816 10:02:14.849573    3758 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt ...
	I0816 10:02:14.849589    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt: {Name:mk9c6a2b5871e8d33f12e0a58268b028ea37ab4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:14.849911    3758 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key ...
	I0816 10:02:14.849919    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key: {Name:mk7d8ed264e583d7ced6b16b60c72c11b15a1f22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:14.850137    3758 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.af99fd6a
	I0816 10:02:14.850151    3758 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.af99fd6a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.254]
	I0816 10:02:14.968129    3758 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.af99fd6a ...
	I0816 10:02:14.968143    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.af99fd6a: {Name:mk3b8945a16852dca8274ec1ab3a9f2eade80831 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:14.968464    3758 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.af99fd6a ...
	I0816 10:02:14.968473    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.af99fd6a: {Name:mk90fcce3348a214c4733fe5a1fb806faff7773f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:14.968717    3758 certs.go:381] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.af99fd6a -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt
	I0816 10:02:14.968940    3758 certs.go:385] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.af99fd6a -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key
	I0816 10:02:14.969171    3758 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key
	I0816 10:02:14.969185    3758 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt with IP's: []
	I0816 10:02:15.029329    3758 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt ...
	I0816 10:02:15.029338    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt: {Name:mk70aaa3903fb58df2124d779dbea07aa95769f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:15.029604    3758 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key ...
	I0816 10:02:15.029611    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key: {Name:mka3a85354927e8da31f9dfc67f92dea3c77ca65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:15.029850    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 10:02:15.029876    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 10:02:15.029898    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 10:02:15.029940    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 10:02:15.029958    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 10:02:15.029990    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 10:02:15.030008    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 10:02:15.030025    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 10:02:15.030118    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem (1338 bytes)
	W0816 10:02:15.030163    3758 certs.go:480] ignoring /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831_empty.pem, impossibly tiny 0 bytes
	I0816 10:02:15.030171    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 10:02:15.030240    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem (1078 bytes)
	I0816 10:02:15.030268    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem (1123 bytes)
	I0816 10:02:15.030354    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem (1679 bytes)
	I0816 10:02:15.030467    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:02:15.030499    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:15.030525    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem -> /usr/share/ca-certificates/1831.pem
	I0816 10:02:15.030542    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /usr/share/ca-certificates/18312.pem
	I0816 10:02:15.031030    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 10:02:15.051618    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 10:02:15.071318    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 10:02:15.091017    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 10:02:15.110497    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0816 10:02:15.131033    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 10:02:15.150747    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 10:02:15.170186    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 10:02:15.189808    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 10:02:15.209868    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem --> /usr/share/ca-certificates/1831.pem (1338 bytes)
	I0816 10:02:15.229378    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /usr/share/ca-certificates/18312.pem (1708 bytes)
	I0816 10:02:15.248667    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 10:02:15.262035    3758 ssh_runner.go:195] Run: openssl version
	I0816 10:02:15.266135    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 10:02:15.275959    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:15.279367    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:48 /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:15.279403    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:15.283611    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 10:02:15.291872    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1831.pem && ln -fs /usr/share/ca-certificates/1831.pem /etc/ssl/certs/1831.pem"
	I0816 10:02:15.299890    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1831.pem
	I0816 10:02:15.303179    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 16:57 /usr/share/ca-certificates/1831.pem
	I0816 10:02:15.303212    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1831.pem
	I0816 10:02:15.307423    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1831.pem /etc/ssl/certs/51391683.0"
	I0816 10:02:15.315741    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18312.pem && ln -fs /usr/share/ca-certificates/18312.pem /etc/ssl/certs/18312.pem"
	I0816 10:02:15.324088    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18312.pem
	I0816 10:02:15.327441    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 16:57 /usr/share/ca-certificates/18312.pem
	I0816 10:02:15.327479    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18312.pem
	I0816 10:02:15.331723    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18312.pem /etc/ssl/certs/3ec20f2e.0"
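
The openssl/ln pairs above implement the OpenSSL hashed-directory scheme: each CA installed under /etc/ssl/certs is reachable via a <subject-hash>.0 symlink (b5213941.0, 51391683.0, and 3ec20f2e.0 here), which is how TLS clients look certificates up by subject. A Go sketch that shells out to openssl for the hash, as the logged commands do, and creates the symlink; paths are illustrative and the program would need root for /etc/ssl/certs:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	// Same hash the logged `openssl x509 -hash -noout` command prints.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// ln -fs equivalent: drop any stale link, then point <hash>.0 at the PEM.
	_ = os.Remove(link)
	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
		panic(err)
	}
}
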
	I0816 10:02:15.339921    3758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 10:02:15.343061    3758 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 10:02:15.343109    3758 kubeadm.go:392] StartCluster: {Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:02:15.343194    3758 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0816 10:02:15.356016    3758 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 10:02:15.363405    3758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 10:02:15.370635    3758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 10:02:15.377806    3758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 10:02:15.377819    3758 kubeadm.go:157] found existing configuration files:
	
	I0816 10:02:15.377855    3758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 10:02:15.384868    3758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 10:02:15.384903    3758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 10:02:15.392001    3758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 10:02:15.398935    3758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 10:02:15.398971    3758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 10:02:15.406181    3758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 10:02:15.413108    3758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 10:02:15.413142    3758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 10:02:15.422603    3758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 10:02:15.435692    3758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 10:02:15.435749    3758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
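
	The sequence above (kubeadm.go:163) is a stale-config sweep: for each kubeconfig on the node, grep for the expected control-plane endpoint and delete the file when the endpoint is missing or the file is absent. A minimal Go sketch of that check-then-remove loop, assuming a hypothetical runSSH helper and placeholder SSH target:

	-- go sketch --
	// Hedged sketch; not minikube's actual ssh_runner implementation.
	package main

	import (
		"fmt"
		"os/exec"
	)

	const endpoint = "https://control-plane.minikube.internal:8443"

	var confFiles = []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}

	// runSSH runs a command on the node; "docker@192.169.0.5" is a placeholder.
	func runSSH(cmd string) error {
		return exec.Command("ssh", "docker@192.169.0.5", cmd).Run()
	}

	func cleanupStaleConfigs() {
		for _, f := range confFiles {
			// grep exits non-zero when the endpoint is not found or the file
			// does not exist (the "Process exited with status 2" lines above);
			// in that case the file is removed.
			if err := runSSH(fmt.Sprintf("sudo grep %q %s", endpoint, f)); err != nil {
				_ = runSSH("sudo rm -f " + f)
			}
		}
	}

	func main() { cleanupStaleConfigs() }
	-- /go sketch --
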
	I0816 10:02:15.450920    3758 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 10:02:15.521417    3758 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 10:02:15.521501    3758 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 10:02:15.590843    3758 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 10:02:15.590923    3758 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 10:02:15.591008    3758 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 10:02:15.600874    3758 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 10:02:15.632377    3758 out.go:235]   - Generating certificates and keys ...
	I0816 10:02:15.632427    3758 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 10:02:15.632475    3758 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 10:02:15.791885    3758 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0816 10:02:15.852803    3758 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0816 10:02:15.961982    3758 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0816 10:02:16.146557    3758 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0816 10:02:16.493401    3758 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0816 10:02:16.493556    3758 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-286000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I0816 10:02:16.704832    3758 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0816 10:02:16.705108    3758 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-286000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I0816 10:02:16.922818    3758 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0816 10:02:17.082810    3758 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0816 10:02:17.393939    3758 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0816 10:02:17.394070    3758 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 10:02:17.465159    3758 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 10:02:17.731984    3758 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 10:02:18.019833    3758 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 10:02:18.164168    3758 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 10:02:18.273756    3758 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 10:02:18.274215    3758 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 10:02:18.275949    3758 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 10:02:18.300490    3758 out.go:235]   - Booting up control plane ...
	I0816 10:02:18.300569    3758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 10:02:18.300638    3758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 10:02:18.300693    3758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 10:02:18.300780    3758 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 10:02:18.300850    3758 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 10:02:18.300879    3758 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 10:02:18.405245    3758 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 10:02:18.405330    3758 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 10:02:18.909805    3758 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 504.760374ms
	I0816 10:02:18.909928    3758 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0816 10:02:24.844637    3758 kubeadm.go:310] [api-check] The API server is healthy after 5.938882642s
	I0816 10:02:24.854864    3758 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 10:02:24.862134    3758 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 10:02:24.890940    3758 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 10:02:24.891101    3758 kubeadm.go:310] [mark-control-plane] Marking the node ha-286000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 10:02:24.901441    3758 kubeadm.go:310] [bootstrap-token] Using token: 73merd.1elxantqs1p5mkiz
	I0816 10:02:24.939545    3758 out.go:235]   - Configuring RBAC rules ...
	I0816 10:02:24.939737    3758 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 10:02:24.944670    3758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 10:02:24.951393    3758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 10:02:24.953383    3758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0816 10:02:24.955899    3758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 10:02:24.958387    3758 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 10:02:25.251799    3758 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 10:02:25.676108    3758 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 10:02:26.249233    3758 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 10:02:26.249936    3758 kubeadm.go:310] 
	I0816 10:02:26.250001    3758 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 10:02:26.250014    3758 kubeadm.go:310] 
	I0816 10:02:26.250090    3758 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 10:02:26.250102    3758 kubeadm.go:310] 
	I0816 10:02:26.250122    3758 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 10:02:26.250175    3758 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 10:02:26.250221    3758 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 10:02:26.250229    3758 kubeadm.go:310] 
	I0816 10:02:26.250268    3758 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 10:02:26.250272    3758 kubeadm.go:310] 
	I0816 10:02:26.250305    3758 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 10:02:26.250313    3758 kubeadm.go:310] 
	I0816 10:02:26.250359    3758 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 10:02:26.250425    3758 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 10:02:26.250484    3758 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 10:02:26.250491    3758 kubeadm.go:310] 
	I0816 10:02:26.250569    3758 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 10:02:26.250648    3758 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 10:02:26.250656    3758 kubeadm.go:310] 
	I0816 10:02:26.250716    3758 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 73merd.1elxantqs1p5mkiz \
	I0816 10:02:26.250793    3758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:995590413ea712c4f7db2cf849d28264ebc291e6cd53d741ce1d22c1daaa1602 \
	I0816 10:02:26.250811    3758 kubeadm.go:310] 	--control-plane 
	I0816 10:02:26.250817    3758 kubeadm.go:310] 
	I0816 10:02:26.250879    3758 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 10:02:26.250885    3758 kubeadm.go:310] 
	I0816 10:02:26.250947    3758 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 73merd.1elxantqs1p5mkiz \
	I0816 10:02:26.251036    3758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:995590413ea712c4f7db2cf849d28264ebc291e6cd53d741ce1d22c1daaa1602 
	I0816 10:02:26.251297    3758 kubeadm.go:310] W0816 17:02:15.599099    1566 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 10:02:26.251521    3758 kubeadm.go:310] W0816 17:02:15.600578    1566 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 10:02:26.251608    3758 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
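
	The join commands printed above carry a --discovery-token-ca-cert-hash. That value is the SHA-256 of the DER-encoded Subject Public Key Info of the cluster CA certificate. A small Go sketch that recomputes it from the node-side CA path shown in the log (reading the file locally is an assumption):

	-- go sketch --
	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/hex"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("ca.crt contains no PEM block")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm's hash is SHA-256 over the DER-encoded SubjectPublicKeyInfo.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
	}
	-- /go sketch --

	Comparing the printed value against the sha256:9955… in the join command verifies that a joining node is talking to the same CA.
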
	I0816 10:02:26.251620    3758 cni.go:84] Creating CNI manager for ""
	I0816 10:02:26.251624    3758 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0816 10:02:26.289324    3758 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 10:02:26.363318    3758 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0816 10:02:26.368971    3758 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0816 10:02:26.368983    3758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0816 10:02:26.382855    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0816 10:02:26.629869    3758 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 10:02:26.629947    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-286000 minikube.k8s.io/updated_at=2024_08_16T10_02_26_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd minikube.k8s.io/name=ha-286000 minikube.k8s.io/primary=true
	I0816 10:02:26.629949    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:26.658712    3758 ops.go:34] apiserver oom_adj: -16
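
	The oom_adj line above comes from running "cat /proc/$(pgrep kube-apiserver)/oom_adj" on the node; -16 means the kernel strongly avoids OOM-killing the apiserver. A Go sketch of the same probe (Linux-only, run on the node rather than the macOS host):

	-- go sketch --
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Find the apiserver PID, as the logged command does with pgrep.
		out, err := exec.Command("pgrep", "kube-apiserver").Output()
		if err != nil {
			panic(err)
		}
		pid := strings.Fields(string(out))[0] // first match if several
		data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			panic(err)
		}
		fmt.Printf("apiserver oom_adj: %s", data)
	}
	-- /go sketch --
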
	I0816 10:02:26.756402    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:27.257667    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:27.757259    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:28.256864    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:28.757602    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:29.256802    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:29.757282    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:02:29.882342    3758 kubeadm.go:1113] duration metric: took 3.252511045s to wait for elevateKubeSystemPrivileges
	I0816 10:02:29.882365    3758 kubeadm.go:394] duration metric: took 14.539506386s to StartCluster
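
	The repeated "kubectl get sa default" runs above are a readiness poll: minikube retries roughly every 500ms until the default ServiceAccount exists, which signals the cluster can accept the RBAC setup ("elevateKubeSystemPrivileges"). A Go sketch of that loop, with the node-side paths taken from the log and the timeout assumed:

	-- go sketch --
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForDefaultSA(timeout time.Duration) error {
		kubectl := "/var/lib/minikube/binaries/v1.31.0/kubectl"
		kubeconfig := "/var/lib/minikube/kubeconfig"
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			err := exec.Command("sudo", kubectl, "get", "sa", "default",
				"--kubeconfig="+kubeconfig).Run()
			if err == nil {
				return nil // default ServiceAccount exists
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default ServiceAccount not ready after %v", timeout)
	}

	func main() {
		if err := waitForDefaultSA(2 * time.Minute); err != nil {
			panic(err)
		}
	}
	-- /go sketch --
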
	I0816 10:02:29.882380    3758 settings.go:142] acquiring lock: {Name:mk60e377244eac756871b74b6bd45115654652a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:29.882471    3758 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:02:29.882922    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/kubeconfig: {Name:mk7f4491c8ec5cf96c96a5a1a5dbfba4301394a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:29.883167    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 10:02:29.883170    3758 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:02:29.883182    3758 start.go:241] waiting for startup goroutines ...
	I0816 10:02:29.883204    3758 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 10:02:29.883238    3758 addons.go:69] Setting storage-provisioner=true in profile "ha-286000"
	I0816 10:02:29.883244    3758 addons.go:69] Setting default-storageclass=true in profile "ha-286000"
	I0816 10:02:29.883261    3758 addons.go:234] Setting addon storage-provisioner=true in "ha-286000"
	I0816 10:02:29.883278    3758 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-286000"
	I0816 10:02:29.883280    3758 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:02:29.883305    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:02:29.883558    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:29.883573    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:29.883575    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:29.883598    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:29.892969    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51028
	I0816 10:02:29.893238    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51030
	I0816 10:02:29.893356    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:29.893605    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:29.893730    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:29.893741    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:29.893938    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:29.893951    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:29.893976    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:29.894142    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:29.894254    3758 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:02:29.894338    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:29.894365    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:29.894397    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:29.894424    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:02:29.896690    3758 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:02:29.896968    3758 kapi.go:59] client config for ha-286000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key", CAFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xa202f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 10:02:29.897373    3758 cert_rotation.go:140] Starting client certificate rotation controller
	I0816 10:02:29.897530    3758 addons.go:234] Setting addon default-storageclass=true in "ha-286000"
	I0816 10:02:29.897550    3758 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:02:29.897771    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:29.897789    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:29.903669    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51032
	I0816 10:02:29.904077    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:29.904431    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:29.904451    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:29.904736    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:29.904889    3758 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:02:29.904993    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:29.905054    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:02:29.906086    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:29.906730    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51034
	I0816 10:02:29.907081    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:29.907463    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:29.907491    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:29.907694    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:29.908045    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:29.908061    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:29.917300    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51036
	I0816 10:02:29.917667    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:29.918020    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:29.918034    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:29.918277    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:29.918384    3758 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:02:29.918483    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:29.918572    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:02:29.919534    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:29.919671    3758 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 10:02:29.919680    3758 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 10:02:29.919688    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:29.919789    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:29.919896    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:29.919990    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:29.920072    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:29.928735    3758 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 10:02:29.965711    3758 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 10:02:29.965725    3758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 10:02:29.965744    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:29.965926    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:29.966043    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:29.966151    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:29.966280    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
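
	The "scp memory --> <path>" lines above describe streaming an in-memory addon manifest straight to a remote path over SSH instead of copying a file from disk. A loose Go sketch of one way to do that by piping stdin into "sudo tee" (the SSH target and asset bytes are placeholders; minikube's real transfer goes through its own SSH session code):

	-- go sketch --
	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	func copyMemoryToRemote(target, remotePath string, data []byte) error {
		// "tee" writes stdin to the destination; its echo is discarded.
		cmd := exec.Command("ssh", target,
			fmt.Sprintf("sudo tee %s >/dev/null", remotePath))
		cmd.Stdin = bytes.NewReader(data)
		return cmd.Run()
	}

	func main() {
		yaml := []byte("apiVersion: storage.k8s.io/v1\nkind: StorageClass\n# ... rest of manifest ...\n")
		if err := copyMemoryToRemote("docker@192.169.0.5",
			"/etc/kubernetes/addons/storageclass.yaml", yaml); err != nil {
			panic(err)
		}
	}
	-- /go sketch --
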
	I0816 10:02:30.033245    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 10:02:30.037035    3758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 10:02:30.090040    3758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 10:02:30.396267    3758 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
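
	The sed pipeline a few lines up edits the CoreDNS ConfigMap in place: it inserts a "hosts" stanza mapping host.minikube.internal to the gateway IP just above the "forward . /etc/resolv.conf" line (and also adds "log" to enable query logging). A Go sketch of the hosts-stanza injection on a Corefile string; the sample Corefile below is illustrative:

	-- go sketch --
	package main

	import (
		"fmt"
		"strings"
	)

	func injectHostRecord(corefile, hostIP string) string {
		stanza := fmt.Sprintf(
			"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
			hostIP)
		var b strings.Builder
		for _, line := range strings.SplitAfter(corefile, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
				b.WriteString(stanza) // hosts block goes just above "forward"
			}
			b.WriteString(line)
		}
		return b.String()
	}

	func main() {
		corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
		fmt.Print(injectHostRecord(corefile, "192.169.0.1"))
	}
	-- /go sketch --
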
	I0816 10:02:30.396311    3758 main.go:141] libmachine: Making call to close driver server
	I0816 10:02:30.396323    3758 main.go:141] libmachine: (ha-286000) Calling .Close
	I0816 10:02:30.396482    3758 main.go:141] libmachine: Successfully made call to close driver server
	I0816 10:02:30.396488    3758 main.go:141] libmachine: (ha-286000) DBG | Closing plugin on server side
	I0816 10:02:30.396492    3758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 10:02:30.396501    3758 main.go:141] libmachine: Making call to close driver server
	I0816 10:02:30.396507    3758 main.go:141] libmachine: (ha-286000) Calling .Close
	I0816 10:02:30.396655    3758 main.go:141] libmachine: Successfully made call to close driver server
	I0816 10:02:30.396662    3758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 10:02:30.396713    3758 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0816 10:02:30.396725    3758 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0816 10:02:30.396804    3758 round_trippers.go:463] GET https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0816 10:02:30.396810    3758 round_trippers.go:469] Request Headers:
	I0816 10:02:30.396817    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:02:30.396822    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:02:30.402286    3758 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 10:02:30.402706    3758 round_trippers.go:463] PUT https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0816 10:02:30.402713    3758 round_trippers.go:469] Request Headers:
	I0816 10:02:30.402718    3758 round_trippers.go:473]     Content-Type: application/json
	I0816 10:02:30.402721    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:02:30.402723    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:02:30.404290    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:02:30.404403    3758 main.go:141] libmachine: Making call to close driver server
	I0816 10:02:30.404416    3758 main.go:141] libmachine: (ha-286000) Calling .Close
	I0816 10:02:30.404565    3758 main.go:141] libmachine: Successfully made call to close driver server
	I0816 10:02:30.404575    3758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 10:02:30.404586    3758 main.go:141] libmachine: (ha-286000) DBG | Closing plugin on server side
	I0816 10:02:30.497806    3758 main.go:141] libmachine: Making call to close driver server
	I0816 10:02:30.497818    3758 main.go:141] libmachine: (ha-286000) Calling .Close
	I0816 10:02:30.498006    3758 main.go:141] libmachine: Successfully made call to close driver server
	I0816 10:02:30.498011    3758 main.go:141] libmachine: (ha-286000) DBG | Closing plugin on server side
	I0816 10:02:30.498016    3758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 10:02:30.498023    3758 main.go:141] libmachine: Making call to close driver server
	I0816 10:02:30.498040    3758 main.go:141] libmachine: (ha-286000) Calling .Close
	I0816 10:02:30.498160    3758 main.go:141] libmachine: Successfully made call to close driver server
	I0816 10:02:30.498168    3758 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 10:02:30.498179    3758 main.go:141] libmachine: (ha-286000) DBG | Closing plugin on server side
	I0816 10:02:30.538607    3758 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0816 10:02:30.596522    3758 addons.go:510] duration metric: took 713.336883ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0816 10:02:30.596578    3758 start.go:246] waiting for cluster config update ...
	I0816 10:02:30.596599    3758 start.go:255] writing updated cluster config ...
	I0816 10:02:30.634683    3758 out.go:201] 
	I0816 10:02:30.655754    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:02:30.655861    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:02:30.677634    3758 out.go:177] * Starting "ha-286000-m02" control-plane node in "ha-286000" cluster
	I0816 10:02:30.737443    3758 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:02:30.737475    3758 cache.go:56] Caching tarball of preloaded images
	I0816 10:02:30.737660    3758 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 10:02:30.737679    3758 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:02:30.737771    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:02:30.738414    3758 start.go:360] acquireMachinesLock for ha-286000-m02: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:02:30.738494    3758 start.go:364] duration metric: took 63.585µs to acquireMachinesLock for "ha-286000-m02"
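
	The lock spec logged above ({... Delay:500ms Timeout:13m0s ...}) implies an acquire-with-retry pattern: keep attempting a non-blocking acquire every Delay until Timeout expires. A minimal Go sketch of that pattern, with a sync.Mutex standing in for minikube's named machines lock:

	-- go sketch --
	package main

	import (
		"fmt"
		"sync"
		"time"
	)

	// acquireWithRetry polls a non-blocking TryLock every delay until the
	// timeout expires, mirroring the Delay/Timeout fields in the log.
	func acquireWithRetry(mu *sync.Mutex, delay, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if mu.TryLock() {
				return nil
			}
			time.Sleep(delay)
		}
		return fmt.Errorf("could not acquire lock within %v", timeout)
	}

	func main() {
		var mu sync.Mutex
		if err := acquireWithRetry(&mu, 500*time.Millisecond, 13*time.Minute); err != nil {
			panic(err)
		}
		defer mu.Unlock()
		fmt.Println("lock acquired")
	}
	-- /go sketch --
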
	I0816 10:02:30.738517    3758 start.go:93] Provisioning new machine with config: &{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:02:30.738590    3758 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I0816 10:02:30.760743    3758 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 10:02:30.760905    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:30.760935    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:30.771028    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51041
	I0816 10:02:30.771383    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:30.771745    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:30.771760    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:30.771967    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:30.772077    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:02:30.772157    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:30.772261    3758 start.go:159] libmachine.API.Create for "ha-286000" (driver="hyperkit")
	I0816 10:02:30.772276    3758 client.go:168] LocalClient.Create starting
	I0816 10:02:30.772303    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem
	I0816 10:02:30.772358    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:02:30.772368    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:02:30.772413    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem
	I0816 10:02:30.772450    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:02:30.772460    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:02:30.772472    3758 main.go:141] libmachine: Running pre-create checks...
	I0816 10:02:30.772477    3758 main.go:141] libmachine: (ha-286000-m02) Calling .PreCreateCheck
	I0816 10:02:30.772545    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:30.772568    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetConfigRaw
	I0816 10:02:30.781772    3758 main.go:141] libmachine: Creating machine...
	I0816 10:02:30.781789    3758 main.go:141] libmachine: (ha-286000-m02) Calling .Create
	I0816 10:02:30.781943    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:30.782181    3758 main.go:141] libmachine: (ha-286000-m02) DBG | I0816 10:02:30.781934    3805 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:02:30.782269    3758 main.go:141] libmachine: (ha-286000-m02) Downloading /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1276/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 10:02:30.982082    3758 main.go:141] libmachine: (ha-286000-m02) DBG | I0816 10:02:30.982020    3805 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa...
	I0816 10:02:31.266609    3758 main.go:141] libmachine: (ha-286000-m02) DBG | I0816 10:02:31.266514    3805 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/ha-286000-m02.rawdisk...
	I0816 10:02:31.266629    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Writing magic tar header
	I0816 10:02:31.266638    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Writing SSH key tar header
	I0816 10:02:31.267382    3758 main.go:141] libmachine: (ha-286000-m02) DBG | I0816 10:02:31.267242    3805 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02 ...
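
	The "Writing magic tar header" / "Writing SSH key tar header" lines above describe how the driver seeds the fresh raw disk: the head of the .rawdisk file carries a small tar stream with the generated SSH key, and the guest picks it up and formats the rest on first boot. A loose Go sketch of that idea (the exact on-disk layout, magic header, and entry names are assumptions, not minikube's precise format):

	-- go sketch --
	package main

	import (
		"archive/tar"
		"os"
	)

	// createRawDisk writes a tar stream holding the SSH public key at the
	// head of the .rawdisk file, then extends the file to the full disk
	// size; the untouched tail stays sparse.
	func createRawDisk(path string, pubKey []byte, sizeBytes int64) error {
		f, err := os.Create(path)
		if err != nil {
			return err
		}
		defer f.Close()
		tw := tar.NewWriter(f)
		hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0600, Size: int64(len(pubKey))}
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		if _, err := tw.Write(pubKey); err != nil {
			return err
		}
		if err := tw.Close(); err != nil {
			return err
		}
		return f.Truncate(sizeBytes)
	}

	func main() {
		key := []byte("ssh-rsa AAAA... jenkins@host\n") // placeholder key
		if err := createRawDisk("ha-286000-m02.rawdisk", key, 20000*1024*1024); err != nil {
			panic(err)
		}
	}
	-- /go sketch --
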
	I0816 10:02:31.642864    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:31.642889    3758 main.go:141] libmachine: (ha-286000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/hyperkit.pid
	I0816 10:02:31.642912    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Using UUID f7630301-2bc3-4935-a751-ac72999da031
	I0816 10:02:31.669349    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Generated MAC 72:69:8f:11:68:1d
	I0816 10:02:31.669369    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000
	I0816 10:02:31.669397    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f7630301-2bc3-4935-a751-ac72999da031", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:02:31.669426    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f7630301-2bc3-4935-a751-ac72999da031", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:02:31.669455    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "f7630301-2bc3-4935-a751-ac72999da031", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/ha-286000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"}
	I0816 10:02:31.669484    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U f7630301-2bc3-4935-a751-ac72999da031 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/ha-286000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"
	I0816 10:02:31.669499    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 10:02:31.672492    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 DEBUG: hyperkit: Pid is 3806
	I0816 10:02:31.672957    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 0
	I0816 10:02:31.673000    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:31.673054    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:31.674014    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:31.674086    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:02:31.674098    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:02:31.674120    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:02:31.674129    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:02:31.674137    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:02:31.680025    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 10:02:31.688109    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 10:02:31.688968    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:02:31.688993    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:02:31.689006    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:02:31.689016    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:02:32.077133    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 10:02:32.077153    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 10:02:32.191789    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:02:32.191806    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:02:32.191814    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:02:32.191825    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:02:32.192676    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 10:02:32.192686    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:32 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 10:02:33.675165    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 1
	I0816 10:02:33.675183    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:33.675297    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:33.676089    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:33.676141    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:02:33.676153    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:02:33.676162    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:02:33.676169    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:02:33.676193    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:02:35.676266    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 2
	I0816 10:02:35.676285    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:35.676360    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:35.677182    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:35.677237    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:02:35.677248    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:02:35.677262    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:02:35.677270    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:02:35.677281    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:02:37.678515    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 3
	I0816 10:02:37.678532    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:37.678572    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:37.679405    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:37.679441    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:02:37.679451    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:02:37.679461    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:02:37.679468    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:02:37.679483    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:02:37.782496    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:37 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 10:02:37.782531    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:37 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 10:02:37.782540    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:37 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 10:02:37.806630    3758 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:02:37 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 10:02:39.680161    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 4
	I0816 10:02:39.680178    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:39.680273    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:39.681064    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:39.681112    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0816 10:02:39.681136    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:02:39.681155    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:02:39.681170    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:02:39.681181    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:02:41.681380    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 5
	I0816 10:02:41.681414    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:41.681563    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:41.682343    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:02:41.682392    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:02:41.682406    3758 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:02:41.682415    3758 main.go:141] libmachine: (ha-286000-m02) DBG | Found match: 72:69:8f:11:68:1d
	I0816 10:02:41.682421    3758 main.go:141] libmachine: (ha-286000-m02) DBG | IP: 192.169.0.6
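
	The attempt loop above is how the driver discovers the new VM's IP: hyperkit VMs get addresses from the macOS bootpd, so the driver re-scans /var/db/dhcpd_leases for the generated MAC until a lease appears (here on attempt 5). A Go sketch of that lease lookup, assuming the stock "key=value" lease format:

	-- go sketch --
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func findIPByMAC(leasesPath, mac string) (string, error) {
		data, err := os.ReadFile(leasesPath)
		if err != nil {
			return "", err
		}
		ip := ""
		for _, line := range strings.Split(string(data), "\n") {
			line = strings.TrimSpace(line)
			if strings.HasPrefix(line, "ip_address=") {
				ip = strings.TrimPrefix(line, "ip_address=")
			}
			// hw_address lines look like "hw_address=1,72:69:8f:11:68:1d".
			if strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac) {
				return ip, nil
			}
		}
		return "", fmt.Errorf("%s not found in %s", mac, leasesPath)
	}

	func main() {
		ip, err := findIPByMAC("/var/db/dhcpd_leases", "72:69:8f:11:68:1d")
		if err != nil {
			panic(err)
		}
		fmt.Println("IP:", ip)
	}
	-- /go sketch --
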
	I0816 10:02:41.682475    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetConfigRaw
	I0816 10:02:41.683135    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:41.683257    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:41.683358    3758 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 10:02:41.683367    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetState
	I0816 10:02:41.683478    3758 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:41.683537    3758 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 3806
	I0816 10:02:41.684414    3758 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 10:02:41.684423    3758 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 10:02:41.684427    3758 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 10:02:41.684431    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:41.684566    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:41.684692    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:41.684809    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:41.684943    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:41.685095    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:41.685300    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:41.685308    3758 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 10:02:42.746559    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
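
Getting to WaitForSSH amounts to retrying a trivial remote command (exit 0) until the guest's sshd accepts the key. A rough sketch of such a retry loop using the system ssh binary; libmachine uses its own native Go SSH client, so the flags below are illustrative only:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForSSH retries `ssh ... exit 0` until it succeeds or the deadline passes.
    func waitForSSH(user, host, keyPath string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("ssh",
                "-i", keyPath,
                "-o", "StrictHostKeyChecking=no",
                "-o", "ConnectTimeout=5",
                fmt.Sprintf("%s@%s", user, host),
                "exit", "0")
            if err := cmd.Run(); err == nil {
                return nil // sshd is up and accepting our key
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("ssh to %s not available after %s", host, timeout)
    }

    func main() {
        if err := waitForSSH("docker", "192.169.0.6", "/path/to/id_rsa", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
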
	I0816 10:02:42.746571    3758 main.go:141] libmachine: Detecting the provisioner...
	I0816 10:02:42.746577    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:42.746714    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:42.746824    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.746914    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.746988    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:42.747112    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:42.747269    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:42.747277    3758 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 10:02:42.811630    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 10:02:42.811666    3758 main.go:141] libmachine: found compatible host: buildroot
	I0816 10:02:42.811672    3758 main.go:141] libmachine: Provisioning with buildroot...
	I0816 10:02:42.811681    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:02:42.811816    3758 buildroot.go:166] provisioning hostname "ha-286000-m02"
	I0816 10:02:42.811828    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:02:42.811943    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:42.812045    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:42.812136    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.812274    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.812376    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:42.812513    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:42.812673    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:42.812682    3758 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-286000-m02 && echo "ha-286000-m02" | sudo tee /etc/hostname
	I0816 10:02:42.887531    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-286000-m02
	
	I0816 10:02:42.887545    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:42.887676    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:42.887769    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.887867    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:42.887953    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:42.888075    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:42.888214    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:42.888225    3758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-286000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-286000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-286000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 10:02:42.959625    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 10:02:42.959641    3758 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19461-1276/.minikube CaCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19461-1276/.minikube}
	I0816 10:02:42.959649    3758 buildroot.go:174] setting up certificates
	I0816 10:02:42.959661    3758 provision.go:84] configureAuth start
	I0816 10:02:42.959666    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:02:42.959803    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:02:42.959899    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:42.959980    3758 provision.go:143] copyHostCerts
	I0816 10:02:42.960007    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:02:42.960055    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem, removing ...
	I0816 10:02:42.960061    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:02:42.960954    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem (1078 bytes)
	I0816 10:02:42.961160    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:02:42.961190    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem, removing ...
	I0816 10:02:42.961195    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:02:42.961273    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem (1123 bytes)
	I0816 10:02:42.961439    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:02:42.961509    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem, removing ...
	I0816 10:02:42.961515    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:02:42.961594    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem (1679 bytes)
	I0816 10:02:42.961746    3758 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem org=jenkins.ha-286000-m02 san=[127.0.0.1 192.169.0.6 ha-286000-m02 localhost minikube]
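
The server certificate is issued by the local minikube CA with the node's identities baked in as subject alternative names, matching the san=[...] list logged above. A condensed crypto/x509 sketch of that issuance (error handling trimmed; the real code lives in libmachine's cert provisioning, and in minikube the CA key pair already exists under .minikube/certs):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // CA (in minikube this already exists; generated here to stay self-contained).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration below
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert: the SAN list mirrors the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-286000-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.6")},
            DNSNames:     []string{"ha-286000-m02", "localhost", "minikube"},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
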
	I0816 10:02:43.093511    3758 provision.go:177] copyRemoteCerts
	I0816 10:02:43.093556    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 10:02:43.093572    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:43.093719    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:43.093818    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.093917    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:43.094005    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:02:43.132229    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 10:02:43.132299    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 10:02:43.151814    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 10:02:43.151873    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 10:02:43.171464    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 10:02:43.171532    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 10:02:43.191045    3758 provision.go:87] duration metric: took 231.381757ms to configureAuth
	I0816 10:02:43.191057    3758 buildroot.go:189] setting minikube options for container-runtime
	I0816 10:02:43.191187    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:02:43.191200    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:43.191321    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:43.191410    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:43.191497    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.191580    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.191658    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:43.191759    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:43.191891    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:43.191898    3758 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 10:02:43.256733    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 10:02:43.256744    3758 buildroot.go:70] root file system type: tmpfs
	I0816 10:02:43.256823    3758 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 10:02:43.256835    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:43.256961    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:43.257048    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.257142    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.257220    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:43.257364    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:43.257505    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:43.257553    3758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 10:02:43.330709    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 10:02:43.330729    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:43.330874    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:43.330977    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.331073    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:43.331167    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:43.331293    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:43.331440    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:43.331451    3758 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 10:02:44.884634    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
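
The diff -u ... || { mv ...; systemctl ... } one-liner above makes the unit install idempotent: docker is only reloaded and restarted when the freshly rendered unit differs from what is on disk, and here the diff fails simply because no docker.service existed yet on the new node. The same compare-then-swap shape in Go (a sketch, not minikube's implementation):

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // installIfChanged swaps newPath into place and restarts the unit only when
    // the rendered unit differs from what is already on disk (or doesn't exist),
    // mirroring the `diff -u ... || { mv ...; systemctl ... }` one-liner above.
    func installIfChanged(current, newPath, unit string) error {
        old, err := os.ReadFile(current)
        rendered, rerr := os.ReadFile(newPath)
        if rerr != nil {
            return rerr
        }
        if err == nil && bytes.Equal(old, rendered) {
            return os.Remove(newPath) // no change: drop the staged copy
        }
        if err := os.Rename(newPath, current); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"daemon-reload"}, {"enable", unit}, {"restart", unit},
        } {
            if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        _ = installIfChanged("/lib/systemd/system/docker.service",
            "/lib/systemd/system/docker.service.new", "docker")
    }
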
	I0816 10:02:44.884648    3758 main.go:141] libmachine: Checking connection to Docker...
	I0816 10:02:44.884655    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetURL
	I0816 10:02:44.884793    3758 main.go:141] libmachine: Docker is up and running!
	I0816 10:02:44.884799    3758 main.go:141] libmachine: Reticulating splines...
	I0816 10:02:44.884804    3758 client.go:171] duration metric: took 14.112778669s to LocalClient.Create
	I0816 10:02:44.884820    3758 start.go:167] duration metric: took 14.112810851s to libmachine.API.Create "ha-286000"
	I0816 10:02:44.884826    3758 start.go:293] postStartSetup for "ha-286000-m02" (driver="hyperkit")
	I0816 10:02:44.884832    3758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 10:02:44.884843    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:44.884991    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 10:02:44.885004    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:44.885101    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:44.885204    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:44.885335    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:44.885428    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:02:44.925701    3758 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 10:02:44.929599    3758 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 10:02:44.929613    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/addons for local assets ...
	I0816 10:02:44.929722    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/files for local assets ...
	I0816 10:02:44.929908    3758 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> 18312.pem in /etc/ssl/certs
	I0816 10:02:44.929914    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /etc/ssl/certs/18312.pem
	I0816 10:02:44.930119    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 10:02:44.940484    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:02:44.973200    3758 start.go:296] duration metric: took 88.36731ms for postStartSetup
	I0816 10:02:44.973230    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetConfigRaw
	I0816 10:02:44.973848    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:02:44.973996    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:02:44.974351    3758 start.go:128] duration metric: took 14.23599979s to createHost
	I0816 10:02:44.974366    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:44.974448    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:44.974568    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:44.974660    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:44.974744    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:44.974844    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:02:44.974966    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:02:44.974973    3758 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 10:02:45.036267    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723827764.932365297
	
	I0816 10:02:45.036285    3758 fix.go:216] guest clock: 1723827764.932365297
	I0816 10:02:45.036291    3758 fix.go:229] Guest: 2024-08-16 10:02:44.932365297 -0700 PDT Remote: 2024-08-16 10:02:44.97436 -0700 PDT m=+56.899083401 (delta=-41.994703ms)
	I0816 10:02:45.036301    3758 fix.go:200] guest clock delta is within tolerance: -41.994703ms
	I0816 10:02:45.036306    3758 start.go:83] releasing machines lock for "ha-286000-m02", held for 14.298063452s
	I0816 10:02:45.036338    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:45.036468    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:02:45.062334    3758 out.go:177] * Found network options:
	I0816 10:02:45.105996    3758 out.go:177]   - NO_PROXY=192.169.0.5
	W0816 10:02:45.128174    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:02:45.128216    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:45.128829    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:45.128969    3758 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:02:45.129060    3758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 10:02:45.129088    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	W0816 10:02:45.129115    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:02:45.129222    3758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0816 10:02:45.129232    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:45.129238    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:02:45.129370    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:45.129393    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:02:45.129500    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:45.129515    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:02:45.129631    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:02:45.129647    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:02:45.129727    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	W0816 10:02:45.164265    3758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 10:02:45.164324    3758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 10:02:45.211468    3758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 10:02:45.211489    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:02:45.211595    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:02:45.228144    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 10:02:45.237310    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 10:02:45.246272    3758 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 10:02:45.246316    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 10:02:45.255987    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:02:45.265400    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 10:02:45.274661    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:02:45.283628    3758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 10:02:45.292648    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 10:02:45.302207    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 10:02:45.311314    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 10:02:45.320414    3758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 10:02:45.328532    3758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 10:02:45.337229    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:45.444954    3758 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0816 10:02:45.464569    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:02:45.464652    3758 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 10:02:45.488292    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:02:45.500162    3758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 10:02:45.561835    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:02:45.572027    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:02:45.582122    3758 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 10:02:45.603011    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:02:45.613488    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:02:45.628434    3758 ssh_runner.go:195] Run: which cri-dockerd
	I0816 10:02:45.631358    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 10:02:45.638496    3758 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 10:02:45.652676    3758 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 10:02:45.755971    3758 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 10:02:45.860533    3758 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 10:02:45.860559    3758 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 10:02:45.874570    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:45.976095    3758 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:02:48.459358    3758 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.483288325s)
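
The "configuring docker to use cgroupfs" step pushes a small daemon.json to the guest before the restart timed above. The exact 130-byte payload is not shown in the log; a plausible minimal version that pins the cgroup driver looks like this (field set assumed, not confirmed by the log):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // DaemonConfig models the handful of dockerd settings pinned here. The
    // cgroup driver is the one the surrounding log lines are about.
    type DaemonConfig struct {
        ExecOpts  []string `json:"exec-opts"`
        LogDriver string   `json:"log-driver,omitempty"`
    }

    func main() {
        cfg := DaemonConfig{
            ExecOpts:  []string{"native.cgroupdriver=cgroupfs"},
            LogDriver: "json-file",
        }
        b, _ := json.MarshalIndent(cfg, "", "  ")
        fmt.Println(string(b)) // would be written to /etc/docker/daemon.json
    }
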
	I0816 10:02:48.459417    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0816 10:02:48.469882    3758 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0816 10:02:48.484429    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:02:48.495038    3758 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0816 10:02:48.597129    3758 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0816 10:02:48.691412    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:48.797592    3758 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0816 10:02:48.812075    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:02:48.823307    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:48.918652    3758 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0816 10:02:48.980191    3758 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0816 10:02:48.980257    3758 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0816 10:02:48.984954    3758 start.go:563] Will wait 60s for crictl version
	I0816 10:02:48.985011    3758 ssh_runner.go:195] Run: which crictl
	I0816 10:02:48.988185    3758 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 10:02:49.015262    3758 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
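
"Will wait 60s for socket path" is a stat poll with a deadline: keep checking for /var/run/cri-dockerd.sock until it exists or time runs out. A minimal sketch of that wait:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls for path to exist, the same shape as minikube's
    // "Will wait 60s for socket path" step (stat in a loop with a deadline).
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s did not appear within %s", path, timeout)
    }

    func main() {
        fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
    }
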
	I0816 10:02:49.015344    3758 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:02:49.032341    3758 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:02:49.078128    3758 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0816 10:02:49.137645    3758 out.go:177]   - env NO_PROXY=192.169.0.5
	I0816 10:02:49.176661    3758 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:02:49.177129    3758 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0816 10:02:49.181778    3758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
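
The one-liner above is an idempotent upsert: it filters out any existing host.minikube.internal line before appending the current mapping, so repeated starts never stack duplicates. The same round trip in Go (a sketch; the path and tab separator mirror the shell command):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHostsEntry rewrites hostsPath so that exactly one line maps name
    // to ip, the same grep -v + echo round trip as the shell one-liner above.
    func upsertHostsEntry(hostsPath, ip, name string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
        return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        _ = upsertHostsEntry("/etc/hosts", "192.169.0.1", "host.minikube.internal")
    }
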
	I0816 10:02:49.191522    3758 mustload.go:65] Loading cluster: ha-286000
	I0816 10:02:49.191665    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:02:49.191885    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:49.191909    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:49.200721    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51064
	I0816 10:02:49.201069    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:49.201413    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:49.201424    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:49.201648    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:49.201776    3758 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:02:49.201862    3758 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:02:49.201960    3758 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:02:49.202920    3758 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:02:49.203188    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:49.203205    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:49.211926    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51066
	I0816 10:02:49.212258    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:49.212635    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:49.212652    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:49.212860    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:49.212967    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:49.213056    3758 certs.go:68] Setting up /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000 for IP: 192.169.0.6
	I0816 10:02:49.213061    3758 certs.go:194] generating shared ca certs ...
	I0816 10:02:49.213071    3758 certs.go:226] acquiring lock for ca certs: {Name:mka549fbc467849834d2267da204028c49d88a3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:49.213229    3758 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key
	I0816 10:02:49.213305    3758 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key
	I0816 10:02:49.213314    3758 certs.go:256] generating profile certs ...
	I0816 10:02:49.213406    3758 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key
	I0816 10:02:49.213429    3758 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.5eb44438
	I0816 10:02:49.213443    3758 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.5eb44438 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I0816 10:02:49.263058    3758 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.5eb44438 ...
	I0816 10:02:49.263077    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.5eb44438: {Name:mk266d5e842df442c104f85113e06d171d5a89ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:49.263395    3758 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.5eb44438 ...
	I0816 10:02:49.263404    3758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.5eb44438: {Name:mk1428912a4c1edaafe55f9bff302c19ad337785 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:02:49.263623    3758 certs.go:381] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.5eb44438 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt
	I0816 10:02:49.263846    3758 certs.go:385] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.5eb44438 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key
	I0816 10:02:49.264093    3758 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key
	I0816 10:02:49.264103    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 10:02:49.264125    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 10:02:49.264143    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 10:02:49.264163    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 10:02:49.264180    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 10:02:49.264199    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 10:02:49.264223    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 10:02:49.264242    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 10:02:49.264331    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem (1338 bytes)
	W0816 10:02:49.264378    3758 certs.go:480] ignoring /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831_empty.pem, impossibly tiny 0 bytes
	I0816 10:02:49.264386    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 10:02:49.264418    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem (1078 bytes)
	I0816 10:02:49.264449    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem (1123 bytes)
	I0816 10:02:49.264477    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem (1679 bytes)
	I0816 10:02:49.264545    3758 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:02:49.264593    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /usr/share/ca-certificates/18312.pem
	I0816 10:02:49.264614    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:49.264634    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem -> /usr/share/ca-certificates/1831.pem
	I0816 10:02:49.264663    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:49.264814    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:49.264897    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:49.265014    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:49.265108    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:49.296192    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0816 10:02:49.299577    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0816 10:02:49.312765    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0816 10:02:49.316551    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0816 10:02:49.332167    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0816 10:02:49.336199    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0816 10:02:49.349166    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0816 10:02:49.354242    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0816 10:02:49.364119    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0816 10:02:49.367349    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0816 10:02:49.375423    3758 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0816 10:02:49.378717    3758 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0816 10:02:49.386918    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 10:02:49.408036    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 10:02:49.428163    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 10:02:49.448952    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 10:02:49.468986    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0816 10:02:49.489674    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 10:02:49.509597    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 10:02:49.530175    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 10:02:49.550578    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /usr/share/ca-certificates/18312.pem (1708 bytes)
	I0816 10:02:49.571266    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 10:02:49.590995    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem --> /usr/share/ca-certificates/1831.pem (1338 bytes)
	I0816 10:02:49.611766    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0816 10:02:49.625222    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0816 10:02:49.638852    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0816 10:02:49.653497    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0816 10:02:49.667098    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0816 10:02:49.680935    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0816 10:02:49.695437    3758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0816 10:02:49.708684    3758 ssh_runner.go:195] Run: openssl version
	I0816 10:02:49.712979    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18312.pem && ln -fs /usr/share/ca-certificates/18312.pem /etc/ssl/certs/18312.pem"
	I0816 10:02:49.721565    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18312.pem
	I0816 10:02:49.725513    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 16:57 /usr/share/ca-certificates/18312.pem
	I0816 10:02:49.725580    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18312.pem
	I0816 10:02:49.730364    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18312.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 10:02:49.739184    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 10:02:49.747494    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:49.750923    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:48 /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:49.750955    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:02:49.755195    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 10:02:49.763703    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1831.pem && ln -fs /usr/share/ca-certificates/1831.pem /etc/ssl/certs/1831.pem"
	I0816 10:02:49.772914    3758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1831.pem
	I0816 10:02:49.776377    3758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 16:57 /usr/share/ca-certificates/1831.pem
	I0816 10:02:49.776421    3758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1831.pem
	I0816 10:02:49.780731    3758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1831.pem /etc/ssl/certs/51391683.0"
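
The ln -fs targets above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-hash names: OpenSSL-linked tools locate a CA in /etc/ssl/certs via a <hash>.0 symlink, and the hash comes from openssl x509 -hash -noout. Go's standard library does not expose that hash, so a sketch can simply shell out for it:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkBySubjectHash creates the /etc/ssl/certs/<hash>.0 symlink that
    // OpenSSL-based tools use to look up a CA cert. The hash algorithm is
    // OpenSSL-specific, so we shell out rather than reimplement it.
    func linkBySubjectHash(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join(certsDir, hash+".0")
        os.Remove(link) // ln -fs semantics: replace any stale link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Println(err)
        }
    }
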
	I0816 10:02:49.789078    3758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 10:02:49.792242    3758 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 10:02:49.792277    3758 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.0 docker true true} ...
	I0816 10:02:49.792332    3758 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-286000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 10:02:49.792349    3758 kube-vip.go:115] generating kube-vip config ...
	I0816 10:02:49.792381    3758 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 10:02:49.804469    3758 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 10:02:49.804511    3758 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
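
The manifest above pins kube-vip's leader election to a 5s lease with a 3s renew deadline and a 1s retry period, so the 192.169.0.254 VIP can fail over between control-plane nodes within seconds. minikube renders this YAML from per-cluster values (VIP, port); a toy text/template sketch of just the varying entries (the fragment is illustrative, not minikube's actual template):

    package main

    import (
        "os"
        "text/template"
    )

    // Only the values that vary per cluster; the full manifest is the YAML above.
    type vipParams struct {
        VIP  string
        Port string
    }

    const snippet = `    - name: address
          value: {{ .VIP }}
        - name: port
          value: "{{ .Port }}"
    `

    func main() {
        tmpl := template.Must(template.New("kube-vip").Parse(snippet))
        // Renders the two env entries that pin the VIP from the log: 192.169.0.254:8443.
        _ = tmpl.Execute(os.Stdout, vipParams{VIP: "192.169.0.254", Port: "8443"})
    }
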
	I0816 10:02:49.804567    3758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 10:02:49.812921    3758 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0816 10:02:49.812983    3758 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0816 10:02:49.821031    3758 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0816 10:02:49.821035    3758 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0816 10:02:49.821039    3758 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubectl
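
Each download URL carries a checksum=file:<url>.sha256 query: fetch the artifact, fetch the published digest, and compare before installing. A self-contained sketch of that scheme (it assumes the .sha256 file holds just the hex digest, which is how dl.k8s.io publishes them; minikube's real downloader is its download package):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
        "strings"
    )

    // fetchVerified downloads url to dest and rejects it unless its SHA-256
    // matches the digest published at url+".sha256".
    func fetchVerified(url, dest string) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()

        f, err := os.Create(dest)
        if err != nil {
            return err
        }
        defer f.Close()

        h := sha256.New()
        if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
            return err
        }

        sumResp, err := http.Get(url + ".sha256")
        if err != nil {
            return err
        }
        defer sumResp.Body.Close()
        want, err := io.ReadAll(sumResp.Body)
        if err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != strings.TrimSpace(string(want)) {
            return fmt.Errorf("checksum mismatch for %s: got %s", url, got)
        }
        return nil
    }

    func main() {
        fmt.Println(fetchVerified("https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet", "/tmp/kubelet"))
    }
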
	I0816 10:02:51.911279    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0816 10:02:51.911374    3758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0816 10:02:51.915115    3758 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0816 10:02:51.915150    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0816 10:02:52.537289    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0816 10:02:52.537383    3758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0816 10:02:52.540898    3758 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0816 10:02:52.540929    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0816 10:02:53.267467    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:02:53.278589    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0816 10:02:53.278731    3758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0816 10:02:53.282055    3758 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0816 10:02:53.282073    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0816 10:02:53.498714    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0816 10:02:53.506112    3758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0816 10:02:53.519672    3758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 10:02:53.533551    3758 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0816 10:02:53.548024    3758 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0816 10:02:53.551124    3758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:02:53.560470    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:02:53.659270    3758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:02:53.674625    3758 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:02:53.674900    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:02:53.674923    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:02:53.683891    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51093
	I0816 10:02:53.684256    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:02:53.684641    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:02:53.684658    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:02:53.684885    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:02:53.685003    3758 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:02:53.685084    3758 start.go:317] joinCluster: &{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:02:53.685155    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0816 10:02:53.685167    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:02:53.685248    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:02:53.685339    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:02:53.685423    3758 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:02:53.685503    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:02:53.765911    3758 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:02:53.765972    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mpmdnf.6rzc0wpb01e0t6lz --discovery-token-ca-cert-hash sha256:995590413ea712c4f7db2cf849d28264ebc291e6cd53d741ce1d22c1daaa1602 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-286000-m02 --control-plane --apiserver-advertise-address=192.169.0.6 --apiserver-bind-port=8443"
	I0816 10:03:22.070391    3758 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mpmdnf.6rzc0wpb01e0t6lz --discovery-token-ca-cert-hash sha256:995590413ea712c4f7db2cf849d28264ebc291e6cd53d741ce1d22c1daaa1602 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-286000-m02 --control-plane --apiserver-advertise-address=192.169.0.6 --apiserver-bind-port=8443": (28.304928658s)
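
The join that just completed is the standard two-step kubeadm flow for adding a control-plane member: print a join command on an existing control plane, then run it on the new node with control-plane flags. Condensed from the two commands in this log, with the token and CA hash left as placeholders:

    # On an existing control-plane node: mint a non-expiring token and print the join command.
    sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" \
        kubeadm token create --print-join-command --ttl=0

    # On the new node: join as an additional control-plane member.
    sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" \
        kubeadm join control-plane.minikube.internal:8443 \
        --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
        --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock \
        --node-name=ha-286000-m02 --control-plane \
        --apiserver-advertise-address=192.169.0.6 --apiserver-bind-port=8443
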
	I0816 10:03:22.070414    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0816 10:03:22.427732    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-286000-m02 minikube.k8s.io/updated_at=2024_08_16T10_03_22_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd minikube.k8s.io/name=ha-286000 minikube.k8s.io/primary=false
	I0816 10:03:22.531211    3758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-286000-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0816 10:03:22.629050    3758 start.go:319] duration metric: took 28.944509117s to joinCluster
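
The two kubectl calls above finish the join: the label stamps minikube's bookkeeping metadata on the node, and the trailing "-" on the taint command removes node-role.kubernetes.io/control-plane:NoSchedule so ordinary workloads can also land on this control-plane node. As standalone commands, assuming a kubeconfig pointed at this cluster:

    kubectl label --overwrite nodes ha-286000-m02 \
        minikube.k8s.io/name=ha-286000 minikube.k8s.io/primary=false
    # The trailing "-" deletes the taint rather than adding it.
    kubectl taint nodes ha-286000-m02 node-role.kubernetes.io/control-plane:NoSchedule-
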
	I0816 10:03:22.629105    3758 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:03:22.629360    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:03:22.652598    3758 out.go:177] * Verifying Kubernetes components...
	I0816 10:03:22.726472    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:03:22.952731    3758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:03:22.975401    3758 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:03:22.975621    3758 kapi.go:59] client config for ha-286000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key", CAFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xa202f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0816 10:03:22.975663    3758 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0816 10:03:22.975836    3758 node_ready.go:35] waiting up to 6m0s for node "ha-286000-m02" to be "Ready" ...
	I0816 10:03:22.975886    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:22.975891    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:22.975897    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:22.975901    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:22.983180    3758 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0816 10:03:23.476314    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:23.476365    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:23.476380    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:23.476394    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:23.502874    3758 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0816 10:03:23.977290    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:23.977304    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:23.977311    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:23.977315    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:23.979495    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:24.477835    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:24.477851    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:24.477858    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:24.477861    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:24.480701    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:24.977514    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:24.977568    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:24.977580    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:24.977588    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:24.980856    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:24.981299    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:25.476235    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:25.476252    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:25.476260    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:25.476277    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:25.478567    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:25.976418    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:25.976469    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:25.976477    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:25.976480    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:25.978730    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:26.476423    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:26.476439    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:26.476445    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:26.476448    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:26.478424    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:26.975919    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:26.975934    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:26.975940    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:26.975945    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:26.977809    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:27.475966    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:27.475981    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:27.475987    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:27.475990    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:27.477769    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:27.478121    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:27.977123    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:27.977139    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:27.977146    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:27.977150    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:27.979266    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:28.477140    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:28.477226    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:28.477254    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:28.477264    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:28.480386    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:28.977368    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:28.977407    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:28.977420    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:28.977427    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:28.980479    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:29.477521    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:29.477545    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:29.477556    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:29.477565    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:29.481179    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:29.481510    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:29.975906    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:29.975932    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:29.975943    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:29.975949    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:29.978633    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:30.477171    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:30.477189    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:30.477198    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:30.477201    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:30.480005    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:30.977947    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:30.977973    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:30.977993    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:30.977998    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:30.982219    3758 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 10:03:31.476252    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:31.476327    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:31.476341    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:31.476347    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:31.479417    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:31.975987    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:31.976012    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:31.976023    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:31.976029    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:31.979409    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:31.979880    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:32.477180    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:32.477207    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:32.477229    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:32.477240    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:32.480686    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:32.976225    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:32.976250    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:32.976261    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:32.976269    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:32.979533    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:33.475956    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:33.475980    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:33.475991    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:33.475997    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:33.479025    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:33.977111    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:33.977185    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:33.977198    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:33.977204    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:33.980426    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:33.980922    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:34.476455    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:34.476470    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:34.476477    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:34.476481    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:34.478517    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:34.976996    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:34.977037    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:34.977049    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:34.977084    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:34.980500    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:35.476045    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:35.476068    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:35.476079    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:35.476086    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:35.479058    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:35.975920    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:35.975944    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:35.975956    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:35.975962    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:35.979071    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:36.477845    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:36.477872    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:36.477885    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:36.477946    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:36.481401    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:36.481890    3758 node_ready.go:53] node "ha-286000-m02" has status "Ready":"False"
	I0816 10:03:36.977033    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:36.977057    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:36.977069    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:36.977076    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:36.980476    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:37.477095    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:37.477120    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:37.477133    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:37.477141    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:37.480485    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:37.976303    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:37.976320    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:37.976343    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:37.976348    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:37.978551    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:38.476492    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:38.476517    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.476528    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.476536    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.480190    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:38.976091    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:38.976114    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.976124    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.976129    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.979741    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:38.980051    3758 node_ready.go:49] node "ha-286000-m02" has status "Ready":"True"
	I0816 10:03:38.980066    3758 node_ready.go:38] duration metric: took 16.004516347s for node "ha-286000-m02" to be "Ready" ...
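
The block of GETs above is a client-side readiness poll: about every 500ms minikube re-reads /api/v1/nodes/ha-286000-m02 and inspects its Ready condition, with a 6m budget; here the node flipped to Ready after roughly 16s. A one-line equivalent with kubectl, assuming the same kubeconfig:

    kubectl wait node/ha-286000-m02 --for=condition=Ready --timeout=6m
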
	I0816 10:03:38.980077    3758 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods, including pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I0816 10:03:38.980127    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:03:38.980135    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.980142    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.980152    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.982937    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:38.987546    3758 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-2kqjf" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.987591    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-2kqjf
	I0816 10:03:38.987597    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.987603    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.987607    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.989339    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.989748    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:38.989758    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.989764    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.989768    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.991324    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.991660    3758 pod_ready.go:93] pod "coredns-6f6b679f8f-2kqjf" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:38.991671    3758 pod_ready.go:82] duration metric: took 4.111381ms for pod "coredns-6f6b679f8f-2kqjf" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.991678    3758 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-rfbz7" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.991708    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-rfbz7
	I0816 10:03:38.991713    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.991718    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.991721    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.993245    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.993624    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:38.993631    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.993640    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.993644    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.995095    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.995419    3758 pod_ready.go:93] pod "coredns-6f6b679f8f-rfbz7" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:38.995427    3758 pod_ready.go:82] duration metric: took 3.744646ms for pod "coredns-6f6b679f8f-rfbz7" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.995433    3758 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.995467    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-286000
	I0816 10:03:38.995472    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.995478    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.995482    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.997007    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.997390    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:38.997397    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.997403    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.997406    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:38.998839    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:38.999151    3758 pod_ready.go:93] pod "etcd-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:38.999160    3758 pod_ready.go:82] duration metric: took 3.721879ms for pod "etcd-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.999165    3758 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:38.999202    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-286000-m02
	I0816 10:03:38.999207    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:38.999213    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:38.999217    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.001189    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:39.001547    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:39.001554    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.001559    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.001563    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.003004    3758 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:03:39.003290    3758 pod_ready.go:93] pod "etcd-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:39.003298    3758 pod_ready.go:82] duration metric: took 4.127582ms for pod "etcd-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.003307    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.177834    3758 request.go:632] Waited for 174.443582ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000
	I0816 10:03:39.177948    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000
	I0816 10:03:39.177963    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.177974    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.177981    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.181371    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:39.376438    3758 request.go:632] Waited for 194.538822ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:39.376562    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:39.376574    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.376586    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.376596    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.379989    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:39.380425    3758 pod_ready.go:93] pod "kube-apiserver-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:39.380437    3758 pod_ready.go:82] duration metric: took 377.131323ms for pod "kube-apiserver-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.380446    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.576633    3758 request.go:632] Waited for 196.095353ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000-m02
	I0816 10:03:39.576687    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000-m02
	I0816 10:03:39.576696    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.576769    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.576793    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.580164    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:39.776642    3758 request.go:632] Waited for 195.66937ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:39.776725    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:39.776735    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.776746    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.776752    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.780273    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:39.780714    3758 pod_ready.go:93] pod "kube-apiserver-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:39.780723    3758 pod_ready.go:82] duration metric: took 400.274102ms for pod "kube-apiserver-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.780730    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:39.977598    3758 request.go:632] Waited for 196.714211ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000
	I0816 10:03:39.977650    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000
	I0816 10:03:39.977659    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:39.977670    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:39.977679    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:39.981079    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.177460    3758 request.go:632] Waited for 195.94045ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:40.177528    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:40.177589    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:40.177603    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:40.177609    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:40.180971    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.181992    3758 pod_ready.go:93] pod "kube-controller-manager-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:40.182036    3758 pod_ready.go:82] duration metric: took 401.277819ms for pod "kube-controller-manager-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:40.182047    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:40.376258    3758 request.go:632] Waited for 194.111468ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000-m02
	I0816 10:03:40.376327    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000-m02
	I0816 10:03:40.376336    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:40.376347    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:40.376355    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:40.379476    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.576506    3758 request.go:632] Waited for 196.409077ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:40.576552    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:40.576560    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:40.576571    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:40.576580    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:40.579964    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.580641    3758 pod_ready.go:93] pod "kube-controller-manager-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:40.580654    3758 pod_ready.go:82] duration metric: took 398.607786ms for pod "kube-controller-manager-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:40.580663    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pt669" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:40.778194    3758 request.go:632] Waited for 197.469682ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pt669
	I0816 10:03:40.778324    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pt669
	I0816 10:03:40.778333    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:40.778345    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:40.778352    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:40.781925    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.976623    3758 request.go:632] Waited for 194.04689ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:40.976752    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:40.976761    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:40.976772    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:40.976780    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:40.980022    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:40.980473    3758 pod_ready.go:93] pod "kube-proxy-pt669" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:40.980485    3758 pod_ready.go:82] duration metric: took 399.822578ms for pod "kube-proxy-pt669" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:40.980494    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-w4nt2" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:41.177361    3758 request.go:632] Waited for 196.811255ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w4nt2
	I0816 10:03:41.177533    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w4nt2
	I0816 10:03:41.177545    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:41.177556    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:41.177563    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:41.181543    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:41.377692    3758 request.go:632] Waited for 195.583238ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:41.377791    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:41.377802    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:41.377812    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:41.377819    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:41.381134    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:41.381716    3758 pod_ready.go:93] pod "kube-proxy-w4nt2" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:41.381726    3758 pod_ready.go:82] duration metric: took 401.234529ms for pod "kube-proxy-w4nt2" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:41.381733    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:41.577020    3758 request.go:632] Waited for 195.246357ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000
	I0816 10:03:41.577073    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000
	I0816 10:03:41.577125    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:41.577138    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:41.577147    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:41.580051    3758 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:03:41.777879    3758 request.go:632] Waited for 197.366145ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:41.777974    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:03:41.777983    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:41.777995    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:41.778003    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:41.781434    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:41.782012    3758 pod_ready.go:93] pod "kube-scheduler-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:41.782023    3758 pod_ready.go:82] duration metric: took 400.292624ms for pod "kube-scheduler-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:41.782051    3758 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:41.976356    3758 request.go:632] Waited for 194.23341ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000-m02
	I0816 10:03:41.976463    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000-m02
	I0816 10:03:41.976473    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:41.976485    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:41.976494    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:41.980088    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:42.176952    3758 request.go:632] Waited for 196.161737ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:42.177009    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:03:42.177018    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.177026    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.177083    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.182888    3758 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 10:03:42.183179    3758 pod_ready.go:93] pod "kube-scheduler-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:03:42.183188    3758 pod_ready.go:82] duration metric: took 401.133714ms for pod "kube-scheduler-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:03:42.183203    3758 pod_ready.go:39] duration metric: took 3.203173842s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
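
The waits above repeat the node poll at pod granularity, once per control-plane component. The same checks can be approximated with kubectl by looping over the label selectors listed in the log:

    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
        kubectl -n kube-system wait pod -l "$sel" --for=condition=Ready --timeout=6m
    done
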
	I0816 10:03:42.183221    3758 api_server.go:52] waiting for apiserver process to appear ...
	I0816 10:03:42.183274    3758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:03:42.195671    3758 api_server.go:72] duration metric: took 19.566910589s to wait for apiserver process to appear ...
	I0816 10:03:42.195682    3758 api_server.go:88] waiting for apiserver healthz status ...
	I0816 10:03:42.195698    3758 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0816 10:03:42.198837    3758 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0816 10:03:42.198870    3758 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0816 10:03:42.198874    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.198880    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.198884    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.199389    3758 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0816 10:03:42.199468    3758 api_server.go:141] control plane version: v1.31.0
	I0816 10:03:42.199478    3758 api_server.go:131] duration metric: took 3.791893ms to wait for apiserver health ...
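
The healthz and /version probes above are plain HTTPS GETs made with the profile's client certificates (the paths appear in the client config dumped earlier in this log). A curl sketch of the same probe, with $MINIKUBE_HOME standing in for the profile root:

    curl --cacert "$MINIKUBE_HOME/ca.crt" \
         --cert "$MINIKUBE_HOME/profiles/ha-286000/client.crt" \
         --key "$MINIKUBE_HOME/profiles/ha-286000/client.key" \
         https://192.169.0.5:8443/healthz        # expect: ok
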
	I0816 10:03:42.199486    3758 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 10:03:42.378048    3758 request.go:632] Waited for 178.500938ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:03:42.378156    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:03:42.378166    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.378178    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.378186    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.382776    3758 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 10:03:42.386068    3758 system_pods.go:59] 17 kube-system pods found
	I0816 10:03:42.386083    3758 system_pods.go:61] "coredns-6f6b679f8f-2kqjf" [be18cd09-3e3b-4749-bf29-7001a879f593] Running
	I0816 10:03:42.386087    3758 system_pods.go:61] "coredns-6f6b679f8f-rfbz7" [a593e27a-e38b-46c7-a603-44963c31c095] Running
	I0816 10:03:42.386090    3758 system_pods.go:61] "etcd-ha-286000" [c45bc623-befe-4e88-8da1-bb4f7d7be5b2] Running
	I0816 10:03:42.386093    3758 system_pods.go:61] "etcd-ha-286000-m02" [e14fbe0c-4527-4cbb-8c13-01c0b32a8d15] Running
	I0816 10:03:42.386096    3758 system_pods.go:61] "kindnet-t9kjf" [20c57f28-5e86-44a4-8a2a-07e1e240abf2] Running
	I0816 10:03:42.386099    3758 system_pods.go:61] "kindnet-whqxb" [1cc5291b-52ea-44b5-b87c-607d46b5281a] Running
	I0816 10:03:42.386102    3758 system_pods.go:61] "kube-apiserver-ha-286000" [c0d298a9-931b-4084-9e5f-01a88de27456] Running
	I0816 10:03:42.386104    3758 system_pods.go:61] "kube-apiserver-ha-286000-m02" [3666c131-2b88-483a-ab8c-b18cb8db7d6f] Running
	I0816 10:03:42.386120    3758 system_pods.go:61] "kube-controller-manager-ha-286000" [5a4f4c5d-d89a-4e5f-b022-479ca31f64cd] Running
	I0816 10:03:42.386127    3758 system_pods.go:61] "kube-controller-manager-ha-286000-m02" [a347c3d4-fa47-49ea-93a0-83087017a2bd] Running
	I0816 10:03:42.386130    3758 system_pods.go:61] "kube-proxy-pt669" [798c956f-8f08-465a-b0d9-90b65f703f80] Running
	I0816 10:03:42.386139    3758 system_pods.go:61] "kube-proxy-w4nt2" [79ce2248-f8fd-4a3b-aeb7-f81d7ad64564] Running
	I0816 10:03:42.386143    3758 system_pods.go:61] "kube-scheduler-ha-286000" [b4af80e0-cbb0-4257-a212-3585faff0875] Running
	I0816 10:03:42.386145    3758 system_pods.go:61] "kube-scheduler-ha-286000-m02" [89920e93-c959-47ec-8a2c-f2b63e8c3d3a] Running
	I0816 10:03:42.386153    3758 system_pods.go:61] "kube-vip-ha-286000" [b0f85be2-2afb-44b0-a701-161b24529349] Running
	I0816 10:03:42.386156    3758 system_pods.go:61] "kube-vip-ha-286000-m02" [4982dc53-946d-4f75-8e9e-452ef39ff2fa] Running
	I0816 10:03:42.386159    3758 system_pods.go:61] "storage-provisioner" [4805d53b-2db3-4092-a3f2-d4a854e93adc] Running
	I0816 10:03:42.386163    3758 system_pods.go:74] duration metric: took 186.675963ms to wait for pod list to return data ...
	I0816 10:03:42.386170    3758 default_sa.go:34] waiting for default service account to be created ...
	I0816 10:03:42.577030    3758 request.go:632] Waited for 190.795237ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0816 10:03:42.577183    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0816 10:03:42.577198    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.577209    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.577215    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.581020    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:42.581204    3758 default_sa.go:45] found service account: "default"
	I0816 10:03:42.581216    3758 default_sa.go:55] duration metric: took 195.045213ms for default service account to be created ...
	I0816 10:03:42.581224    3758 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 10:03:42.777160    3758 request.go:632] Waited for 195.848894ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:03:42.777252    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:03:42.777268    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.777285    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.777303    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.781637    3758 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 10:03:42.784962    3758 system_pods.go:86] 17 kube-system pods found
	I0816 10:03:42.784974    3758 system_pods.go:89] "coredns-6f6b679f8f-2kqjf" [be18cd09-3e3b-4749-bf29-7001a879f593] Running
	I0816 10:03:42.784978    3758 system_pods.go:89] "coredns-6f6b679f8f-rfbz7" [a593e27a-e38b-46c7-a603-44963c31c095] Running
	I0816 10:03:42.784981    3758 system_pods.go:89] "etcd-ha-286000" [c45bc623-befe-4e88-8da1-bb4f7d7be5b2] Running
	I0816 10:03:42.784984    3758 system_pods.go:89] "etcd-ha-286000-m02" [e14fbe0c-4527-4cbb-8c13-01c0b32a8d15] Running
	I0816 10:03:42.784987    3758 system_pods.go:89] "kindnet-t9kjf" [20c57f28-5e86-44a4-8a2a-07e1e240abf2] Running
	I0816 10:03:42.784990    3758 system_pods.go:89] "kindnet-whqxb" [1cc5291b-52ea-44b5-b87c-607d46b5281a] Running
	I0816 10:03:42.784992    3758 system_pods.go:89] "kube-apiserver-ha-286000" [c0d298a9-931b-4084-9e5f-01a88de27456] Running
	I0816 10:03:42.784995    3758 system_pods.go:89] "kube-apiserver-ha-286000-m02" [3666c131-2b88-483a-ab8c-b18cb8db7d6f] Running
	I0816 10:03:42.784997    3758 system_pods.go:89] "kube-controller-manager-ha-286000" [5a4f4c5d-d89a-4e5f-b022-479ca31f64cd] Running
	I0816 10:03:42.785001    3758 system_pods.go:89] "kube-controller-manager-ha-286000-m02" [a347c3d4-fa47-49ea-93a0-83087017a2bd] Running
	I0816 10:03:42.785003    3758 system_pods.go:89] "kube-proxy-pt669" [798c956f-8f08-465a-b0d9-90b65f703f80] Running
	I0816 10:03:42.785006    3758 system_pods.go:89] "kube-proxy-w4nt2" [79ce2248-f8fd-4a3b-aeb7-f81d7ad64564] Running
	I0816 10:03:42.785009    3758 system_pods.go:89] "kube-scheduler-ha-286000" [b4af80e0-cbb0-4257-a212-3585faff0875] Running
	I0816 10:03:42.785011    3758 system_pods.go:89] "kube-scheduler-ha-286000-m02" [89920e93-c959-47ec-8a2c-f2b63e8c3d3a] Running
	I0816 10:03:42.785015    3758 system_pods.go:89] "kube-vip-ha-286000" [b0f85be2-2afb-44b0-a701-161b24529349] Running
	I0816 10:03:42.785017    3758 system_pods.go:89] "kube-vip-ha-286000-m02" [4982dc53-946d-4f75-8e9e-452ef39ff2fa] Running
	I0816 10:03:42.785023    3758 system_pods.go:89] "storage-provisioner" [4805d53b-2db3-4092-a3f2-d4a854e93adc] Running
	I0816 10:03:42.785028    3758 system_pods.go:126] duration metric: took 203.80313ms to wait for k8s-apps to be running ...
	I0816 10:03:42.785034    3758 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 10:03:42.785088    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:03:42.796381    3758 system_svc.go:56] duration metric: took 11.343439ms WaitForService to wait for kubelet
	I0816 10:03:42.796397    3758 kubeadm.go:582] duration metric: took 20.16764951s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
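
The kubelet service check above leans entirely on systemctl's exit status, with --quiet suppressing output. A standalone sketch of the same probe, run on the node:

    sudo systemctl is-active --quiet kubelet && echo "kubelet is running"
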
	I0816 10:03:42.796409    3758 node_conditions.go:102] verifying NodePressure condition ...
	I0816 10:03:42.977594    3758 request.go:632] Waited for 181.143005ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0816 10:03:42.977695    3758 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0816 10:03:42.977706    3758 round_trippers.go:469] Request Headers:
	I0816 10:03:42.977719    3758 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:03:42.977724    3758 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:03:42.981373    3758 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:03:42.981988    3758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 10:03:42.982013    3758 node_conditions.go:123] node cpu capacity is 2
	I0816 10:03:42.982039    3758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 10:03:42.982043    3758 node_conditions.go:123] node cpu capacity is 2
	I0816 10:03:42.982046    3758 node_conditions.go:105] duration metric: took 185.637747ms to run NodePressure ...
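
The NodePressure step above only reads each node's capacity fields; both nodes report 2 CPUs and 17734596Ki of ephemeral storage. The same fields are visible with kubectl:

    kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,EPHEMERAL:.status.capacity.ephemeral-storage
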
	I0816 10:03:42.982055    3758 start.go:241] waiting for startup goroutines ...
	I0816 10:03:42.982079    3758 start.go:255] writing updated cluster config ...
	I0816 10:03:43.003740    3758 out.go:201] 
	I0816 10:03:43.024885    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:03:43.024976    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:03:43.047776    3758 out.go:177] * Starting "ha-286000-m03" control-plane node in "ha-286000" cluster
	I0816 10:03:43.137621    3758 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:03:43.137655    3758 cache.go:56] Caching tarball of preloaded images
	I0816 10:03:43.137861    3758 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 10:03:43.137884    3758 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:03:43.138022    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:03:43.159475    3758 start.go:360] acquireMachinesLock for ha-286000-m03: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:03:43.159592    3758 start.go:364] duration metric: took 91.442µs to acquireMachinesLock for "ha-286000-m03"
	I0816 10:03:43.159625    3758 start.go:93] Provisioning new machine with config: &{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:03:43.159736    3758 start.go:125] createHost starting for "m03" (driver="hyperkit")
	I0816 10:03:43.180569    3758 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 10:03:43.180719    3758 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:03:43.180762    3758 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:03:43.191087    3758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51098
	I0816 10:03:43.191558    3758 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:03:43.191889    3758 main.go:141] libmachine: Using API Version  1
	I0816 10:03:43.191899    3758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:03:43.192106    3758 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:03:43.192302    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:03:43.192394    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:43.192506    3758 start.go:159] libmachine.API.Create for "ha-286000" (driver="hyperkit")
	I0816 10:03:43.192526    3758 client.go:168] LocalClient.Create starting
	I0816 10:03:43.192559    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem
	I0816 10:03:43.192606    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:03:43.192620    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:03:43.192675    3758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem
	I0816 10:03:43.192704    3758 main.go:141] libmachine: Decoding PEM data...
	I0816 10:03:43.192714    3758 main.go:141] libmachine: Parsing certificate...
	I0816 10:03:43.192726    3758 main.go:141] libmachine: Running pre-create checks...
	I0816 10:03:43.192731    3758 main.go:141] libmachine: (ha-286000-m03) Calling .PreCreateCheck
	I0816 10:03:43.192813    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:43.192833    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetConfigRaw
	I0816 10:03:43.202038    3758 main.go:141] libmachine: Creating machine...
	I0816 10:03:43.202062    3758 main.go:141] libmachine: (ha-286000-m03) Calling .Create
	I0816 10:03:43.202302    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:43.202574    3758 main.go:141] libmachine: (ha-286000-m03) DBG | I0816 10:03:43.202284    3848 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:03:43.202705    3758 main.go:141] libmachine: (ha-286000-m03) Downloading /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1276/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 10:03:43.529965    3758 main.go:141] libmachine: (ha-286000-m03) DBG | I0816 10:03:43.529902    3848 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa...
	I0816 10:03:43.631057    3758 main.go:141] libmachine: (ha-286000-m03) DBG | I0816 10:03:43.630965    3848 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/ha-286000-m03.rawdisk...
	I0816 10:03:43.631074    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Writing magic tar header
	I0816 10:03:43.631083    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Writing SSH key tar header
	I0816 10:03:43.631711    3758 main.go:141] libmachine: (ha-286000-m03) DBG | I0816 10:03:43.631681    3848 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03 ...
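The two "Writing ... tar header" lines refer to the boot2docker convention of seeding the raw disk with a small tar archive holding the freshly generated SSH key, which the guest unpacks on first boot. A simplified sketch of that handoff, assuming a plain tar at offset zero; the hyperkit driver's actual on-disk layout is more involved:

    package main

    import (
        "archive/tar"
        "os"
    )

    // makeRawDisk writes a tar archive containing the SSH key material at
    // the start of the raw disk image, then sparse-extends the file to the
    // full disk size. This mirrors the boot2docker convention seen in the
    // log; the exact header layout used by the driver differs in detail.
    func makeRawDisk(path string, key []byte, sizeBytes int64) error {
        f, err := os.Create(path)
        if err != nil {
            return err
        }
        defer f.Close()

        tw := tar.NewWriter(f)
        hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0600, Size: int64(len(key))}
        if err := tw.WriteHeader(hdr); err != nil {
            return err
        }
        if _, err := tw.Write(key); err != nil {
            return err
        }
        if err := tw.Close(); err != nil {
            return err
        }
        // Sparse-extend to the requested disk size (20000MB in this run).
        return f.Truncate(sizeBytes)
    }

    func main() {
        _ = makeRawDisk("disk.rawdisk", []byte("ssh-rsa AAAA... jenkins"), 20000*1024*1024)
    }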
	I0816 10:03:44.155270    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:44.155286    3758 main.go:141] libmachine: (ha-286000-m03) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/hyperkit.pid
	I0816 10:03:44.155318    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Using UUID 408efb18-ff91-428f-b414-6eb0d982a779
	I0816 10:03:44.181981    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Generated MAC 8a:e:de:5b:b5:8b
	I0816 10:03:44.182010    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000
	I0816 10:03:44.182059    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"408efb18-ff91-428f-b414-6eb0d982a779", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:03:44.182101    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"408efb18-ff91-428f-b414-6eb0d982a779", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:03:44.182228    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "408efb18-ff91-428f-b414-6eb0d982a779", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/ha-286000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"}
	I0816 10:03:44.182306    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 408efb18-ff91-428f-b414-6eb0d982a779 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/ha-286000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"
	I0816 10:03:44.182332    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 10:03:44.185479    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 DEBUG: hyperkit: Pid is 3849
	I0816 10:03:44.185900    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 0
	I0816 10:03:44.185912    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:44.186025    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:44.186994    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:44.187062    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:03:44.187079    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:03:44.187114    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:03:44.187142    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:03:44.187168    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:03:44.187185    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
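Because hyperkit VMs get their address from the vmnet DHCP server, the driver discovers the node's IP by polling /var/db/dhcpd_leases for an entry matching the MAC it generated, as the attempts above show. A minimal parser for that lease format, assuming the brace-delimited name=value blocks macOS writes, with ip_address preceding hw_address in each block:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // findIPByMAC scans /var/db/dhcpd_leases for a block whose hw_address
    // field ends with the given MAC and returns that block's ip_address.
    func findIPByMAC(path, mac string) (string, error) {
        f, err := os.Open(path)
        if err != nil {
            return "", err
        }
        defer f.Close()

        var ip string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            switch {
            case strings.HasPrefix(line, "ip_address="):
                ip = strings.TrimPrefix(line, "ip_address=")
            case strings.HasPrefix(line, "hw_address="):
                // hw_address=1,8a:e:de:5b:b5:8b -- the leading "1," is the
                // hardware type, so a suffix match on the MAC is enough.
                if strings.HasSuffix(line, mac) {
                    return ip, nil
                }
            }
        }
        if err := sc.Err(); err != nil {
            return "", err
        }
        return "", fmt.Errorf("no lease found for %s", mac)
    }

    func main() {
        ip, err := findIPByMAC("/var/db/dhcpd_leases", "8a:e:de:5b:b5:8b")
        fmt.Println(ip, err)
    }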
	I0816 10:03:44.193352    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 10:03:44.201781    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 10:03:44.202595    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:03:44.202610    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:03:44.202637    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:03:44.202653    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:03:44.588441    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 10:03:44.588458    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 10:03:44.703100    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:03:44.703117    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:03:44.703125    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:03:44.703135    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:03:44.703952    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 10:03:44.703963    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:44 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 10:03:46.188345    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 1
	I0816 10:03:46.188360    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:46.188480    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:46.189255    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:46.189306    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:03:46.189321    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:03:46.189335    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:03:46.189352    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:03:46.189359    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:03:46.189385    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:03:48.190823    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 2
	I0816 10:03:48.190838    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:48.190916    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:48.191692    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:48.191747    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:03:48.191757    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:03:48.191779    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:03:48.191787    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:03:48.191803    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:03:48.191815    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:03:50.191897    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 3
	I0816 10:03:50.191913    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:50.191977    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:50.192744    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:50.192793    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:03:50.192802    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:03:50.192811    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:03:50.192820    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:03:50.192836    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:03:50.192850    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:03:50.310705    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:50 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 10:03:50.310770    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:50 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 10:03:50.310783    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:50 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 10:03:50.334054    3758 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:03:50 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 10:03:52.193443    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 4
	I0816 10:03:52.193459    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:52.193535    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:52.194319    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:52.194367    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0816 10:03:52.194379    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0d7b0}
	I0816 10:03:52.194390    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:03:52.194399    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:12:f0:50:8d:14:c9 ID:1,12:f0:50:8d:14:c9 Lease:0x66c0d68f}
	I0816 10:03:52.194408    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:40:9:4d:dc:28 ID:1,de:40:9:4d:dc:28 Lease:0x66bf846c}
	I0816 10:03:52.194419    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:3e:15:d0:68:43:a3 ID:1,3e:15:d0:68:43:a3 Lease:0x66c0d452}
	I0816 10:03:54.195434    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 5
	I0816 10:03:54.195456    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:54.195540    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:54.196406    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:03:54.196510    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found 6 entries in /var/db/dhcpd_leases!
	I0816 10:03:54.196530    3758 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66c0d7f9}
	I0816 10:03:54.196551    3758 main.go:141] libmachine: (ha-286000-m03) DBG | Found match: 8a:e:de:5b:b5:8b
	I0816 10:03:54.196553    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetConfigRaw
	I0816 10:03:54.196565    3758 main.go:141] libmachine: (ha-286000-m03) DBG | IP: 192.169.0.7
	I0816 10:03:54.197236    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:54.197351    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:54.197441    3758 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 10:03:54.197454    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetState
	I0816 10:03:54.197541    3758 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:03:54.197611    3758 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:03:54.198439    3758 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 10:03:54.198446    3758 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 10:03:54.198450    3758 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 10:03:54.198454    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:54.198532    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:54.198612    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:54.198695    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:54.198783    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:54.198892    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:54.199063    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:54.199071    3758 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 10:03:55.266165    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
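WaitForSSH reduces to dialing port 22 with the machine's key and running `exit 0` until the command succeeds, which is exactly the empty output logged above. A sketch with golang.org/x/crypto/ssh; the key path, user, and retry interval below are illustrative:

    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func waitForSSH(addr, user, keyPath string, timeout time.Duration) error {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VMs only
            Timeout:         5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            client, err := ssh.Dial("tcp", addr, cfg)
            if err == nil {
                sess, err := client.NewSession()
                if err == nil {
                    runErr := sess.Run("exit 0")
                    sess.Close()
                    client.Close()
                    if runErr == nil {
                        return nil
                    }
                } else {
                    client.Close()
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("ssh not available at %s after %s", addr, timeout)
    }

    func main() {
        err := waitForSSH("192.169.0.7:22", "docker", "id_rsa", 2*time.Minute)
        fmt.Println(err)
    }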
	I0816 10:03:55.266179    3758 main.go:141] libmachine: Detecting the provisioner...
	I0816 10:03:55.266185    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.266318    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.266413    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.266507    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.266601    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.266731    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.266879    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.266886    3758 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 10:03:55.332778    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 10:03:55.332822    3758 main.go:141] libmachine: found compatible host: buildroot
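Provisioner detection runs `cat /etc/os-release` on the guest and matches the ID field; ID=buildroot selects the buildroot provisioner used for minikube's ISO. A small sketch of that match:

    package main

    import (
        "fmt"
        "strings"
    )

    // detectProvisioner picks a provisioner name from /etc/os-release
    // output; only the buildroot case from this log is handled here.
    func detectProvisioner(osRelease string) string {
        for _, line := range strings.Split(osRelease, "\n") {
            if strings.HasPrefix(line, "ID=") {
                return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
            }
        }
        return "unknown"
    }

    func main() {
        out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\n"
        fmt.Println(detectProvisioner(out)) // buildroot
    }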
	I0816 10:03:55.332828    3758 main.go:141] libmachine: Provisioning with buildroot...
	I0816 10:03:55.332834    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:03:55.332971    3758 buildroot.go:166] provisioning hostname "ha-286000-m03"
	I0816 10:03:55.332982    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:03:55.333079    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.333186    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.333277    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.333375    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.333463    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.333593    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.333737    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.333746    3758 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-286000-m03 && echo "ha-286000-m03" | sudo tee /etc/hostname
	I0816 10:03:55.411701    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-286000-m03
	
	I0816 10:03:55.411715    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.411851    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.411965    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.412057    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.412140    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.412274    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.412418    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.412429    3758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-286000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-286000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-286000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 10:03:55.485102    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 10:03:55.485118    3758 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19461-1276/.minikube CaCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19461-1276/.minikube}
	I0816 10:03:55.485127    3758 buildroot.go:174] setting up certificates
	I0816 10:03:55.485134    3758 provision.go:84] configureAuth start
	I0816 10:03:55.485141    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:03:55.485284    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:03:55.485375    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.485445    3758 provision.go:143] copyHostCerts
	I0816 10:03:55.485474    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:03:55.485527    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem, removing ...
	I0816 10:03:55.485533    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:03:55.485654    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem (1123 bytes)
	I0816 10:03:55.485864    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:03:55.485895    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem, removing ...
	I0816 10:03:55.485900    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:03:55.486003    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem (1679 bytes)
	I0816 10:03:55.486172    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:03:55.486206    3758 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem, removing ...
	I0816 10:03:55.486215    3758 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:03:55.486286    3758 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem (1078 bytes)
	I0816 10:03:55.486435    3758 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem org=jenkins.ha-286000-m03 san=[127.0.0.1 192.169.0.7 ha-286000-m03 localhost minikube]
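The server certificate generated here is a CA-signed leaf whose SANs cover loopback, the node's new IP, its hostname, localhost, and minikube, matching the san=[...] list in the log line above. A condensed crypto/x509 sketch; the key sizes and validity are illustrative, and the CA is generated inline rather than loaded from ca.pem/ca-key.pem as minikube does:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-286000-m03"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs matching the san=[...] list in the log line above.
            DNSNames:    []string{"ha-286000-m03", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.7")},
        }
        der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }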
	I0816 10:03:55.572803    3758 provision.go:177] copyRemoteCerts
	I0816 10:03:55.572855    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 10:03:55.572869    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.573015    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.573114    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.573208    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.573304    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:03:55.612104    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 10:03:55.612186    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 10:03:55.632484    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 10:03:55.632548    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 10:03:55.652677    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 10:03:55.652752    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 10:03:55.672555    3758 provision.go:87] duration metric: took 187.4165ms to configureAuth
	I0816 10:03:55.672568    3758 buildroot.go:189] setting minikube options for container-runtime
	I0816 10:03:55.672735    3758 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:03:55.672748    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:55.672889    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.672982    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.673071    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.673157    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.673245    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.673371    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.673496    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.673504    3758 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 10:03:55.738053    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 10:03:55.738064    3758 buildroot.go:70] root file system type: tmpfs
	I0816 10:03:55.738156    3758 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 10:03:55.738167    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.738306    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.738397    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.738489    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.738573    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.738694    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.738841    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.738890    3758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 10:03:55.813774    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 10:03:55.813790    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:55.813924    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:55.814019    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.814103    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:55.814193    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:55.814320    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:55.814470    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:55.814484    3758 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 10:03:57.356529    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0816 10:03:57.356544    3758 main.go:141] libmachine: Checking connection to Docker...
	I0816 10:03:57.356549    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetURL
	I0816 10:03:57.356692    3758 main.go:141] libmachine: Docker is up and running!
	I0816 10:03:57.356700    3758 main.go:141] libmachine: Reticulating splines...
	I0816 10:03:57.356705    3758 client.go:171] duration metric: took 14.164443579s to LocalClient.Create
	I0816 10:03:57.356717    3758 start.go:167] duration metric: took 14.164482312s to libmachine.API.Create "ha-286000"
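The docker.service unit shown above is rendered host-side with the proxy environment substituted in, then installed only if it differs from what is already on disk (the diff/mv one-liner a few lines up). Note that when systemd sees two Environment="NO_PROXY=..." lines, the later assignment wins, so only the combined 192.169.0.5,192.169.0.6 value takes effect. A cut-down text/template sketch of the rendering step; the template body and field names here are illustrative, not minikube's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    // A reduced version of the docker.service drop-in seen in the log.
    const unitTmpl = `[Service]
    Type=notify
    Restart=on-failure
    {{range .NoProxy}}Environment="NO_PROXY={{.}}"
    {{end}}ExecStart=
    ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock
    `

    func main() {
        t := template.Must(template.New("docker").Parse(unitTmpl))
        data := struct{ NoProxy []string }{
            NoProxy: []string{"192.169.0.5", "192.169.0.5,192.169.0.6"},
        }
        _ = t.Execute(os.Stdout, data)
    }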
	I0816 10:03:57.356727    3758 start.go:293] postStartSetup for "ha-286000-m03" (driver="hyperkit")
	I0816 10:03:57.356734    3758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 10:03:57.356744    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:57.356919    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 10:03:57.356935    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:57.357030    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:57.357130    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:57.357273    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:57.357390    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:03:57.398231    3758 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 10:03:57.402472    3758 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 10:03:57.402483    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/addons for local assets ...
	I0816 10:03:57.402571    3758 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/files for local assets ...
	I0816 10:03:57.402723    3758 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> 18312.pem in /etc/ssl/certs
	I0816 10:03:57.402729    3758 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /etc/ssl/certs/18312.pem
	I0816 10:03:57.402895    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 10:03:57.412226    3758 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:03:57.443242    3758 start.go:296] duration metric: took 86.507779ms for postStartSetup
	I0816 10:03:57.443268    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetConfigRaw
	I0816 10:03:57.443871    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:03:57.444031    3758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:03:57.444386    3758 start.go:128] duration metric: took 14.284913269s to createHost
	I0816 10:03:57.444401    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:57.444490    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:57.444568    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:57.444658    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:57.444732    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:57.444842    3758 main.go:141] libmachine: Using SSH client type: native
	I0816 10:03:57.444963    3758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8b49ea0] 0x8b4cc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:03:57.444971    3758 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 10:03:57.509634    3758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723827837.586061636
	
	I0816 10:03:57.509648    3758 fix.go:216] guest clock: 1723827837.586061636
	I0816 10:03:57.509653    3758 fix.go:229] Guest: 2024-08-16 10:03:57.586061636 -0700 PDT Remote: 2024-08-16 10:03:57.444395 -0700 PDT m=+129.370490857 (delta=141.666636ms)
	I0816 10:03:57.509665    3758 fix.go:200] guest clock delta is within tolerance: 141.666636ms
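The guest-clock check parses the `date +%s.%N` output, subtracts the host's wall clock, and skips resynchronization when the absolute delta is within tolerance, 141.666636ms in this run. A sketch of that comparison; the 2s threshold below is an assumed value for illustration:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // clockDelta converts a `date +%s.%N` string to a time and returns the
    // signed offset from the reference clock.
    func clockDelta(guest string, host time.Time) (time.Duration, error) {
        parts := strings.SplitN(strings.TrimSpace(guest), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return 0, err
        }
        var nsec int64
        if len(parts) == 2 {
            // Pad or truncate the fractional part to exactly 9 digits.
            frac := (parts[1] + "000000000")[:9]
            if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
                return 0, err
            }
        }
        return time.Unix(sec, nsec).Sub(host), nil
    }

    func main() {
        d, err := clockDelta("1723827837.586061636", time.Now())
        if err != nil {
            panic(err)
        }
        if d < 0 {
            d = -d
        }
        const tolerance = 2 * time.Second // illustrative threshold
        fmt.Printf("delta=%s within tolerance=%v\n", d, d <= tolerance)
    }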
	I0816 10:03:57.509669    3758 start.go:83] releasing machines lock for "ha-286000-m03", held for 14.350339851s
	I0816 10:03:57.509691    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:57.509832    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:03:57.542277    3758 out.go:177] * Found network options:
	I0816 10:03:57.562266    3758 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0816 10:03:57.583040    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	W0816 10:03:57.583073    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:03:57.583092    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:57.583964    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:57.584310    3758 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:03:57.584469    3758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 10:03:57.584522    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	W0816 10:03:57.584570    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	W0816 10:03:57.584596    3758 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:03:57.584708    3758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0816 10:03:57.584735    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:03:57.584813    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:57.584995    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:57.585023    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:03:57.585164    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:03:57.585195    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:57.585338    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:03:57.585406    3758 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:03:57.585566    3758 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	W0816 10:03:57.623481    3758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 10:03:57.623541    3758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 10:03:57.670278    3758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 10:03:57.670298    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:03:57.670408    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:03:57.686134    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 10:03:57.694681    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 10:03:57.703134    3758 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 10:03:57.703198    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 10:03:57.711736    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:03:57.720041    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 10:03:57.728421    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:03:57.737252    3758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 10:03:57.746356    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 10:03:57.755117    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 10:03:57.763962    3758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 10:03:57.772639    3758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 10:03:57.780684    3758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 10:03:57.788938    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:03:57.893932    3758 ssh_runner.go:195] Run: sudo systemctl restart containerd
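Each sed invocation above flips one knob in /etc/containerd/config.toml before the daemon-reload and restart; for instance, forcing SystemdCgroup = false selects the cgroupfs driver. The same edit expressed in Go with a multiline regexp, purely as an illustration of what the sed command does:

    package main

    import (
        "fmt"
        "regexp"
    )

    // setSystemdCgroup mirrors the sed command in the log:
    //   s|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g
    func setSystemdCgroup(config []byte, value bool) []byte {
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        return re.ReplaceAll(config, []byte(fmt.Sprintf("${1}SystemdCgroup = %v", value)))
    }

    func main() {
        in := []byte("  [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n    SystemdCgroup = true\n")
        fmt.Print(string(setSystemdCgroup(in, false)))
    }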
	I0816 10:03:57.913150    3758 start.go:495] detecting cgroup driver to use...
	I0816 10:03:57.913224    3758 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 10:03:57.932274    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:03:57.945000    3758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 10:03:57.965222    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:03:57.977460    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:03:57.988259    3758 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 10:03:58.006552    3758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:03:58.017261    3758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:03:58.032545    3758 ssh_runner.go:195] Run: which cri-dockerd
	I0816 10:03:58.035464    3758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 10:03:58.042742    3758 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 10:03:58.056747    3758 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 10:03:58.163471    3758 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 10:03:58.281020    3758 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 10:03:58.281044    3758 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 10:03:58.294992    3758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:03:58.399122    3758 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:04:59.415916    3758 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.017935847s)
	I0816 10:04:59.415981    3758 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0816 10:04:59.453653    3758 out.go:201] 
	W0816 10:04:59.474040    3758 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 16 17:03:56 ha-286000-m03 systemd[1]: Starting Docker Application Container Engine...
	Aug 16 17:03:56 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:56.200976866Z" level=info msg="Starting up"
	Aug 16 17:03:56 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:56.201455258Z" level=info msg="containerd not running, starting managed containerd"
	Aug 16 17:03:56 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:56.201908035Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=519
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.218394236Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233514110Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233584256Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233654513Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233690141Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233768300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233806284Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233955562Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.233996222Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.234026670Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.234054929Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.234137090Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.234382642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.235934793Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.235985918Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.236117022Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.236159609Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.236256962Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.236327154Z" level=info msg="metadata content store policy set" policy=shared
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238422470Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238509436Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238555940Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238594343Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238633954Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238734892Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.238944917Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239083528Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239123172Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239153894Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239184820Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239214617Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239248728Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239285992Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239333696Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239368624Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239411950Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239451554Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239490333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239523121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239554681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239589296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239621390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239653930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239683266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239712579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239742056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239772828Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239801901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239830636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239859570Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239893189Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239929555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239960582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.239991787Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240076099Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240121809Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240153088Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240182438Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240210918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240240067Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240268586Z" level=info msg="NRI interface is disabled by configuration."
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240520719Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240609661Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240710485Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 16 17:03:56 ha-286000-m03 dockerd[519]: time="2024-08-16T17:03:56.240795617Z" level=info msg="containerd successfully booted in 0.023088s"
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.221812848Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.228854621Z" level=info msg="Loading containers: start."
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.312023793Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.390459780Z" level=info msg="Loading containers: done."
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.404746652Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.404864935Z" level=info msg="Daemon has completed initialization"
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.430646618Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 16 17:03:57 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:57.430810646Z" level=info msg="API listen on [::]:2376"
	Aug 16 17:03:57 ha-286000-m03 systemd[1]: Started Docker Application Container Engine.
	Aug 16 17:03:58 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:58.488220464Z" level=info msg="Processing signal 'terminated'"
	Aug 16 17:03:58 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:58.489144852Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 16 17:03:58 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:58.489473398Z" level=info msg="Daemon shutdown complete"
	Aug 16 17:03:58 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:58.489515121Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 16 17:03:58 ha-286000-m03 dockerd[513]: time="2024-08-16T17:03:58.489527809Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 16 17:03:58 ha-286000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Aug 16 17:03:59 ha-286000-m03 systemd[1]: docker.service: Deactivated successfully.
	Aug 16 17:03:59 ha-286000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Aug 16 17:03:59 ha-286000-m03 systemd[1]: Starting Docker Application Container Engine...
	Aug 16 17:03:59 ha-286000-m03 dockerd[913]: time="2024-08-16T17:03:59.520198872Z" level=info msg="Starting up"
	Aug 16 17:04:59 ha-286000-m03 dockerd[913]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 16 17:04:59 ha-286000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 16 17:04:59 ha-286000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 16 17:04:59 ha-286000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0816 10:04:59.474111    3758 out.go:270] * 
	W0816 10:04:59.474938    3758 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:04:59.558192    3758 out.go:201] 
	
	
	==> Docker <==
	Aug 16 17:20:30 ha-286000 dockerd[1241]: time="2024-08-16T17:20:30.775879149Z" level=warning msg="cleaning up after shim disconnected" id=8d7a6d0f95379fb7bf40d7e00f814e7c0b51d525a2fc0c90e3b972ee3aae0551 namespace=moby
	Aug 16 17:20:30 ha-286000 dockerd[1241]: time="2024-08-16T17:20:30.775947513Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 16 17:20:30 ha-286000 dockerd[1235]: time="2024-08-16T17:20:30.778539517Z" level=info msg="ignoring event" container=8d7a6d0f95379fb7bf40d7e00f814e7c0b51d525a2fc0c90e3b972ee3aae0551 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 16 17:20:31 ha-286000 dockerd[1241]: time="2024-08-16T17:20:31.822099137Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:20:31 ha-286000 dockerd[1241]: time="2024-08-16T17:20:31.822416639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:20:31 ha-286000 dockerd[1241]: time="2024-08-16T17:20:31.822509651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:20:31 ha-286000 dockerd[1241]: time="2024-08-16T17:20:31.822672860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:21:33 ha-286000 dockerd[1235]: time="2024-08-16T17:21:33.020391689Z" level=info msg="ignoring event" container=2e6e9443b194b68b939fd043c6c026e742e6ca27acad8e28dcefec7d1c7e431d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 16 17:21:33 ha-286000 dockerd[1241]: time="2024-08-16T17:21:33.021388685Z" level=info msg="shim disconnected" id=2e6e9443b194b68b939fd043c6c026e742e6ca27acad8e28dcefec7d1c7e431d namespace=moby
	Aug 16 17:21:33 ha-286000 dockerd[1241]: time="2024-08-16T17:21:33.021632997Z" level=warning msg="cleaning up after shim disconnected" id=2e6e9443b194b68b939fd043c6c026e742e6ca27acad8e28dcefec7d1c7e431d namespace=moby
	Aug 16 17:21:33 ha-286000 dockerd[1241]: time="2024-08-16T17:21:33.021676830Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 16 17:21:35 ha-286000 dockerd[1235]: time="2024-08-16T17:21:35.908444379Z" level=info msg="ignoring event" container=a5da1871a366dba9a7d23ca85788081f2c2e6a83218ffa546097678f44554b82 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 16 17:21:35 ha-286000 dockerd[1241]: time="2024-08-16T17:21:35.908822201Z" level=info msg="shim disconnected" id=a5da1871a366dba9a7d23ca85788081f2c2e6a83218ffa546097678f44554b82 namespace=moby
	Aug 16 17:21:35 ha-286000 dockerd[1241]: time="2024-08-16T17:21:35.908872093Z" level=warning msg="cleaning up after shim disconnected" id=a5da1871a366dba9a7d23ca85788081f2c2e6a83218ffa546097678f44554b82 namespace=moby
	Aug 16 17:21:35 ha-286000 dockerd[1241]: time="2024-08-16T17:21:35.908880600Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 16 17:21:38 ha-286000 cri-dockerd[1134]: time="2024-08-16T17:21:38Z" level=error msg="error getting RW layer size for container ID '8d7a6d0f95379fb7bf40d7e00f814e7c0b51d525a2fc0c90e3b972ee3aae0551': Error response from daemon: No such container: 8d7a6d0f95379fb7bf40d7e00f814e7c0b51d525a2fc0c90e3b972ee3aae0551"
	Aug 16 17:21:38 ha-286000 cri-dockerd[1134]: time="2024-08-16T17:21:38Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8d7a6d0f95379fb7bf40d7e00f814e7c0b51d525a2fc0c90e3b972ee3aae0551'"
	Aug 16 17:21:50 ha-286000 dockerd[1241]: time="2024-08-16T17:21:50.681258291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:21:50 ha-286000 dockerd[1241]: time="2024-08-16T17:21:50.681315689Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:21:50 ha-286000 dockerd[1241]: time="2024-08-16T17:21:50.681325223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:21:50 ha-286000 dockerd[1241]: time="2024-08-16T17:21:50.681413926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:21:54 ha-286000 dockerd[1241]: time="2024-08-16T17:21:54.670958858Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:21:54 ha-286000 dockerd[1241]: time="2024-08-16T17:21:54.671197620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:21:54 ha-286000 dockerd[1241]: time="2024-08-16T17:21:54.671225845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:21:54 ha-286000 dockerd[1241]: time="2024-08-16T17:21:54.671393274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	63b366c951f2a       604f5db92eaa8                                                                                         About a minute ago   Running             kube-apiserver            3                   818ee6dafe6c9       kube-apiserver-ha-286000
	7f657edc1d3b8       38af8ddebf499                                                                                         About a minute ago   Running             kube-vip                  2                   69fba128b04a6       kube-vip-ha-286000
	2e6e9443b194b       604f5db92eaa8                                                                                         2 minutes ago        Exited              kube-apiserver            2                   818ee6dafe6c9       kube-apiserver-ha-286000
	0529825d87ca5       6e38f40d628db                                                                                         4 minutes ago        Running             storage-provisioner       2                   482990a4b00e6       storage-provisioner
	078fa65ce0cbb       6e38f40d628db                                                                                         4 minutes ago        Exited              storage-provisioner       1                   482990a4b00e6       storage-provisioner
	a5da1871a366d       38af8ddebf499                                                                                         4 minutes ago        Exited              kube-vip                  1                   69fba128b04a6       kube-vip-ha-286000
	5bbe7a8cc492e       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   17 minutes ago       Running             busybox                   0                   1873ade92edb9       busybox-7dff88458-dvmvk
	bcd7170b050a5       cbb01a7bd410d                                                                                         20 minutes ago       Running             coredns                   0                   452942e267927       coredns-6f6b679f8f-rfbz7
	60d3d03e297ce       cbb01a7bd410d                                                                                         20 minutes ago       Running             coredns                   0                   fbd84fb813c90       coredns-6f6b679f8f-2kqjf
	2677394dcd66f       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              20 minutes ago       Running             kindnet-cni               0                   afaf657e85c32       kindnet-whqxb
	81f6c96d46494       ad83b2ca7b09e                                                                                         20 minutes ago       Running             kube-proxy                0                   dfb822fc27b5e       kube-proxy-w4nt2
	8f5867ee99d9b       045733566833c                                                                                         20 minutes ago       Running             kube-controller-manager   0                   e87fd3e77384f       kube-controller-manager-ha-286000
	f7b2e9efdd94f       1766f54c897f0                                                                                         20 minutes ago       Running             kube-scheduler            0                   8e2b186980aff       kube-scheduler-ha-286000
	d8dadff6cec78       2e96e5913fc06                                                                                         20 minutes ago       Running             etcd                      0                   cdb14ff7d8896       etcd-ha-286000
	
	
	==> coredns [60d3d03e297c] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[1595166943]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 17:19:50.818) (total time: 11830ms):
	Trace[1595166943]: ---"Objects listed" error:Unauthorized 11830ms (17:20:02.649)
	Trace[1595166943]: [11.830466351s] [11.830466351s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2678": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2678": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[852140040]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 17:20:06.131) (total time: 10521ms):
	Trace[852140040]: ---"Objects listed" error:Unauthorized 10521ms (17:20:16.652)
	Trace[852140040]: [10.521589006s] [10.521589006s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	
	
	==> coredns [bcd7170b050a] <==
	Trace[1408803744]: ---"Objects listed" error:Unauthorized 12956ms (17:20:16.648)
	Trace[1408803744]: [12.956854777s] [12.956854777s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[1786059905]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 17:20:05.425) (total time: 11223ms):
	Trace[1786059905]: ---"Objects listed" error:Unauthorized 11223ms (17:20:16.649)
	Trace[1786059905]: [11.223878813s] [11.223878813s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: Unexpected error when reading response body: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: unexpected error when reading response body. Please retry. Original error: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: Trace[1902597424]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 17:20:18.397) (total time: 12364ms):
	Trace[1902597424]: ---"Objects listed" error:unexpected error when reading response body. Please retry. Original error: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug="" 12364ms (17:20:30.761)
	Trace[1902597424]: [12.364669513s] [12.364669513s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: unexpected error when reading response body. Please retry. Original error: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: Unexpected error when reading response body: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: unexpected error when reading response body. Please retry. Original error: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: unexpected error when reading response body. Please retry. Original error: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	
	
	==> describe nodes <==
	Name:               ha-286000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-286000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=ha-286000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T10_02_26_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:02:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-286000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:22:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 17:22:31 +0000   Fri, 16 Aug 2024 17:22:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 17:22:31 +0000   Fri, 16 Aug 2024 17:22:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 17:22:31 +0000   Fri, 16 Aug 2024 17:22:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 17:22:31 +0000   Fri, 16 Aug 2024 17:22:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-286000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 a56c711da9894c3c86f2a7af9fec2b53
	  System UUID:                ad96408c-0000-0000-89eb-d74e5b68d297
	  Boot ID:                    23e4d4eb-a603-45bf-aca4-fb0893407f5a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-dvmvk              0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-6f6b679f8f-2kqjf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 coredns-6f6b679f8f-rfbz7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 etcd-ha-286000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         20m
	  kube-system                 kindnet-whqxb                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	  kube-system                 kube-apiserver-ha-286000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-ha-286000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-w4nt2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-ha-286000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-vip-ha-286000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 20m                  kube-proxy       
	  Normal  NodeAllocatableEnforced  20m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)    kubelet          Node ha-286000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)    kubelet          Node ha-286000 status is now: NodeHasSufficientPID
	  Normal  Starting                 20m                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)    kubelet          Node ha-286000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 20m                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  20m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                  node-controller  Node ha-286000 event: Registered Node ha-286000 in Controller
	  Normal  RegisteredNode           19m                  node-controller  Node ha-286000 event: Registered Node ha-286000 in Controller
	  Normal  RegisteredNode           2m9s                 node-controller  Node ha-286000 event: Registered Node ha-286000 in Controller
	  Normal  NodeNotReady             46s (x2 over 2m11s)  node-controller  Node ha-286000 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  25s (x3 over 20m)    kubelet          Node ha-286000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x3 over 20m)    kubelet          Node ha-286000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x3 over 20m)    kubelet          Node ha-286000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                25s (x3 over 20m)    kubelet          Node ha-286000 status is now: NodeReady
	
	
	Name:               ha-286000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-286000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=ha-286000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T10_03_22_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:03:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-286000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:22:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 17:22:06 +0000   Fri, 16 Aug 2024 17:22:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 17:22:06 +0000   Fri, 16 Aug 2024 17:22:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 17:22:06 +0000   Fri, 16 Aug 2024 17:22:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 17:22:06 +0000   Fri, 16 Aug 2024 17:22:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-286000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 8be69630bb6240d89e619bc6a94fcf7a
	  System UUID:                f7634935-0000-0000-a751-ac72999da031
	  Boot ID:                    250b0152-00f0-47c6-9426-f15d22e85825
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-k9m92                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 etcd-ha-286000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kindnet-t9kjf                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-apiserver-ha-286000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-286000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-pt669                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-286000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-286000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 19m                    kube-proxy       
	  Normal  Starting                 114s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)      kubelet          Node ha-286000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)      kubelet          Node ha-286000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)      kubelet          Node ha-286000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                    node-controller  Node ha-286000-m02 event: Registered Node ha-286000-m02 in Controller
	  Normal  RegisteredNode           19m                    node-controller  Node ha-286000-m02 event: Registered Node ha-286000-m02 in Controller
	  Normal  Starting                 2m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m22s (x8 over 2m22s)  kubelet          Node ha-286000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m22s (x8 over 2m22s)  kubelet          Node ha-286000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m22s (x7 over 2m22s)  kubelet          Node ha-286000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m9s                   node-controller  Node ha-286000-m02 event: Registered Node ha-286000-m02 in Controller
	  Normal  NodeNotReady             51s                    node-controller  Node ha-286000-m02 status is now: NodeNotReady
	
	
	Name:               ha-286000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-286000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=ha-286000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T10_17_22_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:17:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-286000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:22:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 17:22:27 +0000   Fri, 16 Aug 2024 17:22:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 17:22:27 +0000   Fri, 16 Aug 2024 17:22:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 17:22:27 +0000   Fri, 16 Aug 2024 17:22:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 17:22:27 +0000   Fri, 16 Aug 2024 17:22:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-286000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 324c7ca05f77443abc4e861a3d5a5224
	  System UUID:                9a6645c6-0000-0000-8cbd-49b6a6a0383b
	  Boot ID:                    839ab079-775d-4939-ac8e-9fb255ba29df
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-99xmp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kindnet-b9r6s              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m35s
	  kube-system                 kube-proxy-5qhgk           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m28s                kube-proxy       
	  Normal  NodeAllocatableEnforced  5m36s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m33s                node-controller  Node ha-286000-m04 event: Registered Node ha-286000-m04 in Controller
	  Normal  RegisteredNode           5m32s                node-controller  Node ha-286000-m04 event: Registered Node ha-286000-m04 in Controller
	  Normal  RegisteredNode           2m9s                 node-controller  Node ha-286000-m04 event: Registered Node ha-286000-m04 in Controller
	  Normal  NodeNotReady             51s (x2 over 2m11s)  node-controller  Node ha-286000-m04 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  29s (x4 over 5m36s)  kubelet          Node ha-286000-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s (x4 over 5m36s)  kubelet          Node ha-286000-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s (x4 over 5m36s)  kubelet          Node ha-286000-m04 status is now: NodeHasSufficientPID
	  Normal  NodeReady                29s (x3 over 5m13s)  kubelet          Node ha-286000-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +2.760553] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +1.351722] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000003] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug16 17:02] systemd-fstab-generator[519]: Ignoring "noauto" option for root device
	[  +0.114039] systemd-fstab-generator[531]: Ignoring "noauto" option for root device
	[  +1.207693] kauditd_printk_skb: 42 callbacks suppressed
	[  +0.611653] systemd-fstab-generator[786]: Ignoring "noauto" option for root device
	[  +0.265175] systemd-fstab-generator[846]: Ignoring "noauto" option for root device
	[  +0.101006] systemd-fstab-generator[858]: Ignoring "noauto" option for root device
	[  +0.123308] systemd-fstab-generator[872]: Ignoring "noauto" option for root device
	[  +2.497195] systemd-fstab-generator[1086]: Ignoring "noauto" option for root device
	[  +0.110299] systemd-fstab-generator[1098]: Ignoring "noauto" option for root device
	[  +0.095670] systemd-fstab-generator[1110]: Ignoring "noauto" option for root device
	[  +0.122299] systemd-fstab-generator[1126]: Ignoring "noauto" option for root device
	[  +3.636333] systemd-fstab-generator[1227]: Ignoring "noauto" option for root device
	[  +0.056190] kauditd_printk_skb: 233 callbacks suppressed
	[  +2.505796] systemd-fstab-generator[1479]: Ignoring "noauto" option for root device
	[  +3.634449] systemd-fstab-generator[1611]: Ignoring "noauto" option for root device
	[  +0.055756] kauditd_printk_skb: 70 callbacks suppressed
	[  +6.910973] systemd-fstab-generator[2107]: Ignoring "noauto" option for root device
	[  +0.079351] kauditd_printk_skb: 72 callbacks suppressed
	[  +9.264773] kauditd_printk_skb: 51 callbacks suppressed
	[Aug16 17:03] kauditd_printk_skb: 26 callbacks suppressed
	[Aug16 17:18] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [d8dadff6cec7] <==
	{"level":"info","ts":"2024-08-16T17:20:42.418171Z","caller":"traceutil/trace.go:171","msg":"trace[1194954975] range","detail":"{range_begin:/registry/serviceaccounts/; range_end:/registry/serviceaccounts0; }","duration":"10.174892515s","start":"2024-08-16T17:20:32.243277Z","end":"2024-08-16T17:20:42.418170Z","steps":["trace[1194954975] 'agreement among raft nodes before linearized reading'  (duration: 10.1748902s)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T17:20:42.418180Z","caller":"traceutil/trace.go:171","msg":"trace[49282146] range","detail":"{range_begin:/registry/serviceaccounts/; range_end:/registry/serviceaccounts0; }","duration":"10.174919495s","start":"2024-08-16T17:20:32.243259Z","end":"2024-08-16T17:20:42.418179Z","steps":["trace[49282146] 'agreement among raft nodes before linearized reading'  (duration: 10.174917198s)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T17:20:42.418189Z","caller":"traceutil/trace.go:171","msg":"trace[462270357] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; }","duration":"10.181116884s","start":"2024-08-16T17:20:32.237071Z","end":"2024-08-16T17:20:42.418188Z","steps":["trace[462270357] 'agreement among raft nodes before linearized reading'  (duration: 10.18111461s)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T17:20:42.418198Z","caller":"traceutil/trace.go:171","msg":"trace[1376830642] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; }","duration":"10.181290741s","start":"2024-08-16T17:20:32.236906Z","end":"2024-08-16T17:20:42.418196Z","steps":["trace[1376830642] 'agreement among raft nodes before linearized reading'  (duration: 10.181288308s)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T17:20:42.418206Z","caller":"traceutil/trace.go:171","msg":"trace[3785906] range","detail":"{range_begin:/registry/pods/; range_end:/registry/pods0; }","duration":"10.186110345s","start":"2024-08-16T17:20:32.232094Z","end":"2024-08-16T17:20:42.418205Z","steps":["trace[3785906] 'agreement among raft nodes before linearized reading'  (duration: 10.186108125s)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T17:20:42.418215Z","caller":"traceutil/trace.go:171","msg":"trace[2145161445] range","detail":"{range_begin:/registry/pods/; range_end:/registry/pods0; }","duration":"10.18613232s","start":"2024-08-16T17:20:32.232081Z","end":"2024-08-16T17:20:42.418214Z","steps":["trace[2145161445] 'agreement among raft nodes before linearized reading'  (duration: 10.186129431s)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T17:20:42.418224Z","caller":"traceutil/trace.go:171","msg":"trace[321149602] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; }","duration":"10.192403689s","start":"2024-08-16T17:20:32.225819Z","end":"2024-08-16T17:20:42.418223Z","steps":["trace[321149602] 'agreement among raft nodes before linearized reading'  (duration: 10.19240143s)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T17:20:42.439320Z","caller":"traceutil/trace.go:171","msg":"trace[2054083274] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/coredns; range_end:; response_count:1; response_revision:2678; }","duration":"3.142848746s","start":"2024-08-16T17:20:39.296467Z","end":"2024-08-16T17:20:42.439316Z","steps":["trace[2054083274] 'agreement among raft nodes before linearized reading'  (duration: 3.142737343s)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T17:20:42.449245Z","caller":"traceutil/trace.go:171","msg":"trace[1397996465] range","detail":"{range_begin:/registry/rolebindings/; range_end:/registry/rolebindings0; }","duration":"10.098608124s","start":"2024-08-16T17:20:32.350632Z","end":"2024-08-16T17:20:42.449240Z","steps":["trace[1397996465] 'agreement among raft nodes before linearized reading'  (duration: 10.06711298s)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T17:20:42.449460Z","caller":"traceutil/trace.go:171","msg":"trace[1832899498] range","detail":"{range_begin:/registry/roles/; range_end:/registry/roles0; }","duration":"10.104648885s","start":"2024-08-16T17:20:32.344807Z","end":"2024-08-16T17:20:42.449456Z","steps":["trace[1832899498] 'agreement among raft nodes before linearized reading'  (duration: 10.073129892s)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T17:20:42.449480Z","caller":"traceutil/trace.go:171","msg":"trace[1167694364] range","detail":"{range_begin:/registry/roles/; range_end:/registry/roles0; }","duration":"10.104686355s","start":"2024-08-16T17:20:32.344791Z","end":"2024-08-16T17:20:42.449477Z","steps":["trace[1167694364] 'agreement among raft nodes before linearized reading'  (duration: 10.073161902s)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T17:20:42.449487Z","caller":"traceutil/trace.go:171","msg":"trace[821386579] range","detail":"{range_begin:/registry/poddisruptionbudgets/; range_end:/registry/poddisruptionbudgets0; }","duration":"10.109732963s","start":"2024-08-16T17:20:32.339752Z","end":"2024-08-16T17:20:42.449485Z","steps":["trace[821386579] 'agreement among raft nodes before linearized reading'  (duration: 10.078208815s)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T17:20:42.450467Z","caller":"traceutil/trace.go:171","msg":"trace[1774285366] range","detail":"{range_begin:/registry/poddisruptionbudgets/; range_end:/registry/poddisruptionbudgets0; }","duration":"10.110762905s","start":"2024-08-16T17:20:32.339699Z","end":"2024-08-16T17:20:42.450461Z","steps":["trace[1774285366] 'agreement among raft nodes before linearized reading'  (duration: 10.078269544s)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T17:20:42.451033Z","caller":"traceutil/trace.go:171","msg":"trace[1297175374] range","detail":"{range_begin:/registry/ingress/; range_end:/registry/ingress0; }","duration":"10.127411801s","start":"2024-08-16T17:20:32.323618Z","end":"2024-08-16T17:20:42.451030Z","steps":["trace[1297175374] 'agreement among raft nodes before linearized reading'  (duration: 10.094391389s)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T17:20:42.451081Z","caller":"traceutil/trace.go:171","msg":"trace[1271056920] range","detail":"{range_begin:/registry/networkpolicies/; range_end:/registry/networkpolicies0; }","duration":"10.131227737s","start":"2024-08-16T17:20:32.319851Z","end":"2024-08-16T17:20:42.451079Z","steps":["trace[1271056920] 'agreement among raft nodes before linearized reading'  (duration: 10.098166004s)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T17:20:42.451306Z","caller":"traceutil/trace.go:171","msg":"trace[21102931] range","detail":"{range_begin:/registry/networkpolicies/; range_end:/registry/networkpolicies0; }","duration":"10.131459536s","start":"2024-08-16T17:20:32.319843Z","end":"2024-08-16T17:20:42.451303Z","steps":["trace[21102931] 'agreement among raft nodes before linearized reading'  (duration: 10.098180027s)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T17:20:42.451605Z","caller":"traceutil/trace.go:171","msg":"trace[1874924595] range","detail":"{range_begin:/registry/endpointslices/; range_end:/registry/endpointslices0; }","duration":"10.139057064s","start":"2024-08-16T17:20:32.312542Z","end":"2024-08-16T17:20:42.451599Z","steps":["trace[1874924595] 'agreement among raft nodes before linearized reading'  (duration: 10.105488647s)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T17:20:42.451664Z","caller":"traceutil/trace.go:171","msg":"trace[1798827977] range","detail":"{range_begin:/registry/endpointslices/; range_end:/registry/endpointslices0; }","duration":"10.13913383s","start":"2024-08-16T17:20:32.312528Z","end":"2024-08-16T17:20:42.451662Z","steps":["trace[1798827977] 'agreement among raft nodes before linearized reading'  (duration: 10.105508758s)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T17:20:42.452011Z","caller":"traceutil/trace.go:171","msg":"trace[2091293936] range","detail":"{range_begin:/registry/leases/; range_end:/registry/leases0; }","duration":"10.145194257s","start":"2024-08-16T17:20:32.306813Z","end":"2024-08-16T17:20:42.452007Z","steps":["trace[2091293936] 'agreement among raft nodes before linearized reading'  (duration: 10.111230197s)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T17:20:42.452054Z","caller":"traceutil/trace.go:171","msg":"trace[325445793] range","detail":"{range_begin:/registry/leases/; range_end:/registry/leases0; }","duration":"10.145252288s","start":"2024-08-16T17:20:32.306800Z","end":"2024-08-16T17:20:42.452052Z","steps":["trace[325445793] 'agreement among raft nodes before linearized reading'  (duration: 10.111249974s)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T17:20:42.450486Z","caller":"traceutil/trace.go:171","msg":"trace[938801264] range","detail":"{range_begin:/registry/runtimeclasses/; range_end:/registry/runtimeclasses0; }","duration":"10.115919082s","start":"2024-08-16T17:20:32.334565Z","end":"2024-08-16T17:20:42.450484Z","steps":["trace[938801264] 'agreement among raft nodes before linearized reading'  (duration: 10.083409766s)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T17:20:42.450494Z","caller":"traceutil/trace.go:171","msg":"trace[639392557] range","detail":"{range_begin:/registry/runtimeclasses/; range_end:/registry/runtimeclasses0; }","duration":"10.115941731s","start":"2024-08-16T17:20:32.334550Z","end":"2024-08-16T17:20:42.450491Z","steps":["trace[639392557] 'agreement among raft nodes before linearized reading'  (duration: 10.083431767s)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T17:20:42.450500Z","caller":"traceutil/trace.go:171","msg":"trace[253070931] range","detail":"{range_begin:/registry/ingressclasses/; range_end:/registry/ingressclasses0; }","duration":"10.122414128s","start":"2024-08-16T17:20:32.328084Z","end":"2024-08-16T17:20:42.450499Z","steps":["trace[253070931] 'agreement among raft nodes before linearized reading'  (duration: 10.089904508s)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T17:20:42.450508Z","caller":"traceutil/trace.go:171","msg":"trace[852528616] range","detail":"{range_begin:/registry/ingressclasses/; range_end:/registry/ingressclasses0; }","duration":"10.122548591s","start":"2024-08-16T17:20:32.327957Z","end":"2024-08-16T17:20:42.450505Z","steps":["trace[852528616] 'agreement among raft nodes before linearized reading'  (duration: 10.090038708s)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T17:20:42.450644Z","caller":"traceutil/trace.go:171","msg":"trace[1809627769] range","detail":"{range_begin:/registry/ingress/; range_end:/registry/ingress0; }","duration":"10.127004117s","start":"2024-08-16T17:20:32.323635Z","end":"2024-08-16T17:20:42.450639Z","steps":["trace[1809627769] 'agreement among raft nodes before linearized reading'  (duration: 10.094367689s)"],"step_count":1}
	
	
	==> kernel <==
	 17:22:56 up 21 min,  0 users,  load average: 0.30, 0.60, 0.38
	Linux ha-286000 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2677394dcd66] <==
	I0816 17:22:15.225310       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	I0816 17:22:25.225371       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:22:25.225484       1 main.go:299] handling current node
	I0816 17:22:25.225667       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:22:25.225700       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:22:25.225935       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 17:22:25.225943       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	I0816 17:22:35.223144       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:22:35.223426       1 main.go:299] handling current node
	I0816 17:22:35.223985       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:22:35.224277       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:22:35.224778       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 17:22:35.224951       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	I0816 17:22:45.231619       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:22:45.231806       1 main.go:299] handling current node
	I0816 17:22:45.231910       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:22:45.231994       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:22:45.232158       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 17:22:45.232263       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	I0816 17:22:55.225733       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:22:55.225894       1 main.go:299] handling current node
	I0816 17:22:55.225954       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:22:55.226004       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:22:55.226143       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 17:22:55.226223       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [2e6e9443b194] <==
	E0816 17:20:42.953007       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: storage is (re)initializing" logger="UnhandledError"
	W0816 17:20:42.953139       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.Lease: storage is (re)initializing
	E0816 17:20:42.953177       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.Lease: failed to list *v1.Lease: storage is (re)initializing" logger="UnhandledError"
	W0816 17:20:43.690335       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.6]
	I0816 17:20:43.878908       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0816 17:20:43.878946       1 aggregator.go:171] initial CRD sync complete...
	I0816 17:20:43.878954       1 autoregister_controller.go:144] Starting autoregister controller
	I0816 17:20:43.878958       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0816 17:20:43.910414       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0816 17:20:43.910449       1 policy_source.go:224] refreshing policies
	I0816 17:20:43.959877       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0816 17:20:43.971844       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0816 17:20:43.974055       1 shared_informer.go:320] Caches are synced for configmaps
	I0816 17:20:43.992041       1 controller.go:615] quota admission added evaluator for: endpoints
	I0816 17:20:44.000235       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0816 17:20:44.002838       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0816 17:20:44.072233       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0816 17:20:44.073235       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0816 17:20:44.076196       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0816 17:20:44.076224       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0816 17:20:44.077503       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0816 17:20:44.078998       1 cache.go:39] Caches are synced for autoregister controller
	I0816 17:20:44.079177       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0816 17:20:44.106164       1 shared_informer.go:320] Caches are synced for node_authorizer
	F0816 17:21:32.873392       1 hooks.go:210] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
	
	
	==> kube-apiserver [63b366c951f2] <==
	I0816 17:21:56.086883       1 establishing_controller.go:81] Starting EstablishingController
	I0816 17:21:56.086901       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I0816 17:21:56.086908       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0816 17:21:56.086916       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0816 17:21:56.178815       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0816 17:21:56.180675       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0816 17:21:56.180964       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0816 17:21:56.180822       1 shared_informer.go:320] Caches are synced for configmaps
	I0816 17:21:56.181867       1 aggregator.go:171] initial CRD sync complete...
	I0816 17:21:56.181907       1 autoregister_controller.go:144] Starting autoregister controller
	I0816 17:21:56.181914       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0816 17:21:56.181919       1 cache.go:39] Caches are synced for autoregister controller
	I0816 17:21:56.180835       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0816 17:21:56.180849       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0816 17:21:56.182163       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0816 17:21:56.180862       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0816 17:21:56.186487       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0816 17:21:56.211370       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0816 17:21:56.220119       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0816 17:21:56.220138       1 policy_source.go:224] refreshing policies
	I0816 17:21:56.251158       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0816 17:21:57.085413       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0816 17:21:57.291379       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5 192.169.0.6]
	I0816 17:21:57.292405       1 controller.go:615] quota admission added evaluator for: endpoints
	I0816 17:21:57.296335       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [8f5867ee99d9] <==
	I0816 17:22:26.157552       1 controller_utils.go:151] "Failed to update status for pod" logger="node-lifecycle-controller" pod="kube-system/kube-controller-manager-ha-286000" err="Operation cannot be fulfilled on pods \"kube-controller-manager-ha-286000\": the object has been modified; please apply your changes to the latest version and try again"
	I0816 17:22:26.162542       1 controller_utils.go:151] "Failed to update status for pod" logger="node-lifecycle-controller" pod="kube-system/kube-scheduler-ha-286000" err="Operation cannot be fulfilled on pods \"kube-scheduler-ha-286000\": the object has been modified; please apply your changes to the latest version and try again"
	I0816 17:22:26.167760       1 controller_utils.go:151] "Failed to update status for pod" logger="node-lifecycle-controller" pod="kube-system/storage-provisioner" err="Operation cannot be fulfilled on pods \"storage-provisioner\": the object has been modified; please apply your changes to the latest version and try again"
	I0816 17:22:26.176196       1 controller_utils.go:151] "Failed to update status for pod" logger="node-lifecycle-controller" pod="kube-system/coredns-6f6b679f8f-2kqjf" err="Operation cannot be fulfilled on pods \"coredns-6f6b679f8f-2kqjf\": the object has been modified; please apply your changes to the latest version and try again"
	I0816 17:22:26.194987       1 controller_utils.go:151] "Failed to update status for pod" logger="node-lifecycle-controller" pod="kube-system/etcd-ha-286000" err="Operation cannot be fulfilled on pods \"etcd-ha-286000\": the object has been modified; please apply your changes to the latest version and try again"
	I0816 17:22:26.210075       1 controller_utils.go:151] "Failed to update status for pod" logger="node-lifecycle-controller" pod="kube-system/kube-vip-ha-286000" err="Operation cannot be fulfilled on pods \"kube-vip-ha-286000\": the object has been modified; please apply your changes to the latest version and try again"
	I0816 17:22:26.214618       1 controller_utils.go:151] "Failed to update status for pod" logger="node-lifecycle-controller" pod="kube-system/kindnet-whqxb" err="Operation cannot be fulfilled on pods \"kindnet-whqxb\": the object has been modified; please apply your changes to the latest version and try again"
	I0816 17:22:26.218929       1 controller_utils.go:151] "Failed to update status for pod" logger="node-lifecycle-controller" pod="kube-system/kube-proxy-w4nt2" err="Operation cannot be fulfilled on pods \"kube-proxy-w4nt2\": the object has been modified; please apply your changes to the latest version and try again"
	I0816 17:22:26.225015       1 controller_utils.go:151] "Failed to update status for pod" logger="node-lifecycle-controller" pod="kube-system/coredns-6f6b679f8f-rfbz7" err="Operation cannot be fulfilled on pods \"coredns-6f6b679f8f-rfbz7\": the object has been modified; please apply your changes to the latest version and try again"
	I0816 17:22:26.229402       1 controller_utils.go:151] "Failed to update status for pod" logger="node-lifecycle-controller" pod="default/busybox-7dff88458-dvmvk" err="Operation cannot be fulfilled on pods \"busybox-7dff88458-dvmvk\": the object has been modified; please apply your changes to the latest version and try again"
	E0816 17:22:26.229463       1 node_lifecycle_controller.go:758] "Unhandled Error" err="unable to mark all pods NotReady on node ha-286000: [Operation cannot be fulfilled on pods \"kube-controller-manager-ha-286000\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on pods \"kube-scheduler-ha-286000\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on pods \"storage-provisioner\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on pods \"coredns-6f6b679f8f-2kqjf\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on pods \"etcd-ha-286000\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on pods \"kube-vip-ha-286000\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on pods \"kindnet-whqxb\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on pods \"kube-proxy-w4nt2\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on pods \"coredns-6f6b679f8f-rfbz7\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on pods \"busybox-7dff88458-dvmvk\": the object has been modified; please apply your changes to the latest version and try again]; queuing for retry" logger="UnhandledError"
	E0816 17:22:31.236481       1 node_lifecycle_controller.go:978] "Error updating node" err="Operation cannot be fulfilled on nodes \"ha-286000-m04\": the object has been modified; please apply your changes to the latest version and try again" logger="node-lifecycle-controller" node="ha-286000-m04"
	E0816 17:22:31.237493       1 node_lifecycle_controller.go:978] "Error updating node" err="Operation cannot be fulfilled on nodes \"ha-286000\": the object has been modified; please apply your changes to the latest version and try again" logger="node-lifecycle-controller" node="ha-286000"
	I0816 17:22:31.264874       1 controller_utils.go:151] "Failed to update status for pod" logger="node-lifecycle-controller" pod="kube-system/etcd-ha-286000" err="Operation cannot be fulfilled on pods \"etcd-ha-286000\": the object has been modified; please apply your changes to the latest version and try again"
	I0816 17:22:31.271995       1 controller_utils.go:151] "Failed to update status for pod" logger="node-lifecycle-controller" pod="kube-system/kube-vip-ha-286000" err="Operation cannot be fulfilled on pods \"kube-vip-ha-286000\": the object has been modified; please apply your changes to the latest version and try again"
	I0816 17:22:31.279534       1 controller_utils.go:151] "Failed to update status for pod" logger="node-lifecycle-controller" pod="kube-system/kindnet-whqxb" err="Operation cannot be fulfilled on pods \"kindnet-whqxb\": the object has been modified; please apply your changes to the latest version and try again"
	I0816 17:22:31.284644       1 controller_utils.go:151] "Failed to update status for pod" logger="node-lifecycle-controller" pod="kube-system/kube-proxy-w4nt2" err="Operation cannot be fulfilled on pods \"kube-proxy-w4nt2\": the object has been modified; please apply your changes to the latest version and try again"
	I0816 17:22:31.289293       1 controller_utils.go:151] "Failed to update status for pod" logger="node-lifecycle-controller" pod="kube-system/coredns-6f6b679f8f-rfbz7" err="Operation cannot be fulfilled on pods \"coredns-6f6b679f8f-rfbz7\": the object has been modified; please apply your changes to the latest version and try again"
	I0816 17:22:31.293853       1 controller_utils.go:151] "Failed to update status for pod" logger="node-lifecycle-controller" pod="default/busybox-7dff88458-dvmvk" err="Operation cannot be fulfilled on pods \"busybox-7dff88458-dvmvk\": the object has been modified; please apply your changes to the latest version and try again"
	I0816 17:22:31.299534       1 controller_utils.go:151] "Failed to update status for pod" logger="node-lifecycle-controller" pod="kube-system/kube-controller-manager-ha-286000" err="Operation cannot be fulfilled on pods \"kube-controller-manager-ha-286000\": the object has been modified; please apply your changes to the latest version and try again"
	I0816 17:22:31.303318       1 controller_utils.go:151] "Failed to update status for pod" logger="node-lifecycle-controller" pod="kube-system/kube-scheduler-ha-286000" err="Operation cannot be fulfilled on pods \"kube-scheduler-ha-286000\": the object has been modified; please apply your changes to the latest version and try again"
	I0816 17:22:31.307950       1 controller_utils.go:151] "Failed to update status for pod" logger="node-lifecycle-controller" pod="kube-system/storage-provisioner" err="Operation cannot be fulfilled on pods \"storage-provisioner\": the object has been modified; please apply your changes to the latest version and try again"
	I0816 17:22:31.314052       1 controller_utils.go:151] "Failed to update status for pod" logger="node-lifecycle-controller" pod="kube-system/coredns-6f6b679f8f-2kqjf" err="Operation cannot be fulfilled on pods \"coredns-6f6b679f8f-2kqjf\": the object has been modified; please apply your changes to the latest version and try again"
	E0816 17:22:31.314118       1 node_lifecycle_controller.go:758] "Unhandled Error" err="unable to mark all pods NotReady on node ha-286000: [Operation cannot be fulfilled on pods \"etcd-ha-286000\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on pods \"kube-vip-ha-286000\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on pods \"kindnet-whqxb\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on pods \"kube-proxy-w4nt2\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on pods \"coredns-6f6b679f8f-rfbz7\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on pods \"busybox-7dff88458-dvmvk\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on pods \"kube-controller-manager-ha-286000\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on pods \"kube-scheduler-ha-286000\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on pods \"storage-provisioner\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on pods \"coredns-6f6b679f8f-2kqjf\": the object has been modified; please apply your changes to the latest version and try again]; queuing for retry" logger="UnhandledError"
	E0816 17:22:36.321709       1 node_lifecycle_controller.go:978] "Error updating node" err="Operation cannot be fulfilled on nodes \"ha-286000\": the object has been modified; please apply your changes to the latest version and try again" logger="node-lifecycle-controller" node="ha-286000"
	
	
	==> kube-proxy [81f6c96d4649] <==
	E0816 17:18:57.696982       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2592\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:00.770881       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2600": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:00.770973       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2600\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:00.771455       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2678": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:00.771540       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2678\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:03.838026       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2592": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:03.838287       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2592\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:09.980567       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2600": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:09.980625       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2600\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:13.053000       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2592": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:13.053145       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2592\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:16.125305       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2678": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:16.125738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2678\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:28.413017       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2600": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:28.413242       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2600\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:37.633251       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2592": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:37.633353       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2592\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:37.633417       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2678": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:37.633437       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2678\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:56.059814       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2600": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:56.059845       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2600\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:20:17.564736       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2592": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:20:17.564831       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2592\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:20:17.565065       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2678": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:20:17.565112       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2678\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [f7b2e9efdd94] <==
	E0816 17:20:19.124937       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:20:20.013337       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 17:20:20.013508       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0816 17:20:22.503962       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 17:20:22.504039       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:20:23.117539       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 17:20:23.117759       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0816 17:20:24.619908       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 17:20:24.620160       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:20:32.932878       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 17:20:32.932925       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:20:34.100467       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 17:20:34.100511       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:20:36.209664       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 17:20:36.209784       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:20:36.615553       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 17:20:36.615615       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:20:37.131529       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 17:20:37.131621       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:20:37.319247       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 17:20:37.319312       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:20:39.232294       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 17:20:39.232326       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0816 17:21:33.466903       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?resourceVersion=2660": dial tcp 192.169.0.5:8443: connect: connection refused
	E0816 17:21:33.467202       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?resourceVersion=2660\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kubelet <==
	Aug 16 17:20:57 ha-286000 kubelet[2114]: I0816 17:20:57.791961    2114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-7dff88458-dvmvk" podStartSLOduration=954.615036438 podStartE2EDuration="15m55.79194846s" podCreationTimestamp="2024-08-16 17:05:02 +0000 UTC" firstStartedPulling="2024-08-16 17:05:03.303506622 +0000 UTC m=+157.806949606" lastFinishedPulling="2024-08-16 17:05:04.480418642 +0000 UTC m=+158.983861628" observedRunningTime="2024-08-16 17:05:04.785287257 +0000 UTC m=+159.288730246" watchObservedRunningTime="2024-08-16 17:20:57.79194846 +0000 UTC m=+1112.295391448"
	Aug 16 17:21:25 ha-286000 kubelet[2114]: E0816 17:21:25.670735    2114 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 17:21:25 ha-286000 kubelet[2114]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 17:21:25 ha-286000 kubelet[2114]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 17:21:25 ha-286000 kubelet[2114]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:21:25 ha-286000 kubelet[2114]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 17:21:33 ha-286000 kubelet[2114]: I0816 17:21:33.179915    2114 scope.go:117] "RemoveContainer" containerID="8d7a6d0f95379fb7bf40d7e00f814e7c0b51d525a2fc0c90e3b972ee3aae0551"
	Aug 16 17:21:33 ha-286000 kubelet[2114]: I0816 17:21:33.180109    2114 scope.go:117] "RemoveContainer" containerID="2e6e9443b194b68b939fd043c6c026e742e6ca27acad8e28dcefec7d1c7e431d"
	Aug 16 17:21:33 ha-286000 kubelet[2114]: E0816 17:21:33.180189    2114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-ha-286000_kube-system(54fd9c91db8add4ea97d383d73f94dbe)\"" pod="kube-system/kube-apiserver-ha-286000" podUID="54fd9c91db8add4ea97d383d73f94dbe"
	Aug 16 17:21:33 ha-286000 kubelet[2114]: I0816 17:21:33.180979    2114 status_manager.go:851] "Failed to get status for pod" podUID="54fd9c91db8add4ea97d383d73f94dbe" pod="kube-system/kube-apiserver-ha-286000" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000\": dial tcp 192.169.0.254:8443: connect: connection refused"
	Aug 16 17:21:35 ha-286000 kubelet[2114]: I0816 17:21:35.631998    2114 status_manager.go:851] "Failed to get status for pod" podUID="54fd9c91db8add4ea97d383d73f94dbe" pod="kube-system/kube-apiserver-ha-286000" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000\": dial tcp 192.169.0.254:8443: connect: connection refused"
	Aug 16 17:21:36 ha-286000 kubelet[2114]: I0816 17:21:36.209143    2114 scope.go:117] "RemoveContainer" containerID="cafa34c562392ad0f4839d505d8a5b0e77e1dad3770e1f2c6e5f587dacbaa856"
	Aug 16 17:21:36 ha-286000 kubelet[2114]: I0816 17:21:36.209404    2114 scope.go:117] "RemoveContainer" containerID="a5da1871a366dba9a7d23ca85788081f2c2e6a83218ffa546097678f44554b82"
	Aug 16 17:21:36 ha-286000 kubelet[2114]: E0816 17:21:36.209516    2114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-vip pod=kube-vip-ha-286000_kube-system(9dfa3b06b26298e967397c0cc0146f44)\"" pod="kube-system/kube-vip-ha-286000" podUID="9dfa3b06b26298e967397c0cc0146f44"
	Aug 16 17:21:38 ha-286000 kubelet[2114]: I0816 17:21:38.274828    2114 scope.go:117] "RemoveContainer" containerID="2e6e9443b194b68b939fd043c6c026e742e6ca27acad8e28dcefec7d1c7e431d"
	Aug 16 17:21:38 ha-286000 kubelet[2114]: E0816 17:21:38.275246    2114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-ha-286000_kube-system(54fd9c91db8add4ea97d383d73f94dbe)\"" pod="kube-system/kube-apiserver-ha-286000" podUID="54fd9c91db8add4ea97d383d73f94dbe"
	Aug 16 17:21:39 ha-286000 kubelet[2114]: I0816 17:21:39.235651    2114 scope.go:117] "RemoveContainer" containerID="2e6e9443b194b68b939fd043c6c026e742e6ca27acad8e28dcefec7d1c7e431d"
	Aug 16 17:21:39 ha-286000 kubelet[2114]: E0816 17:21:39.235763    2114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-ha-286000_kube-system(54fd9c91db8add4ea97d383d73f94dbe)\"" pod="kube-system/kube-apiserver-ha-286000" podUID="54fd9c91db8add4ea97d383d73f94dbe"
	Aug 16 17:21:50 ha-286000 kubelet[2114]: I0816 17:21:50.632003    2114 scope.go:117] "RemoveContainer" containerID="a5da1871a366dba9a7d23ca85788081f2c2e6a83218ffa546097678f44554b82"
	Aug 16 17:21:54 ha-286000 kubelet[2114]: I0816 17:21:54.631265    2114 scope.go:117] "RemoveContainer" containerID="2e6e9443b194b68b939fd043c6c026e742e6ca27acad8e28dcefec7d1c7e431d"
	Aug 16 17:22:25 ha-286000 kubelet[2114]: E0816 17:22:25.669452    2114 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 17:22:25 ha-286000 kubelet[2114]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 17:22:25 ha-286000 kubelet[2114]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 17:22:25 ha-286000 kubelet[2114]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:22:25 ha-286000 kubelet[2114]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
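The repeated kubelet errors in the log above ("Could not set up iptables canary ... can't initialize ip6tables table `nat'") indicate the guest kernel could not initialize the ip6tables nat table, typically because the ip6table_nat module is not loaded. A minimal check from the host, sketched under the assumption that the ha-286000 profile is still running (standard minikube/ip6tables invocations, not commands taken from this log):

    # List the ip6tables nat table inside the guest VM; this fails the same
    # way the kubelet canary does if the table is unavailable.
    out/minikube-darwin-amd64 ssh -p ha-286000 "sudo ip6tables -t nat -L"
    # If the table is missing, loading the module (when built for this kernel)
    # should make it available.
    out/minikube-darwin-amd64 ssh -p ha-286000 "sudo modprobe ip6table_nat"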
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-286000 -n ha-286000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-286000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (163.59s)
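For failures of this shape, where the kube-apiserver log ends with a fatal PostStartHook error ("start-service-ip-repair-controllers" failed) and the kubelet reports CrashLoopBackOff for kube-apiserver-ha-286000, a short triage sequence along the following lines can help. This is a sketch using standard kubectl and minikube commands; the context and pod names come from the logs above, while the output file name is illustrative:

    # Confirm which nodes the ha-286000 context can still reach.
    kubectl --context ha-286000 get nodes -o wide
    # Fetch the log of the previously crashed apiserver container to capture
    # the fatal hook error.
    kubectl --context ha-286000 -n kube-system logs kube-apiserver-ha-286000 --previous
    # Save the full minikube log bundle for the profile (file name illustrative).
    out/minikube-darwin-amd64 logs -p ha-286000 --file=ha-286000-postmortem.txt

The kubectl invocation mirrors the harness's own post-mortem call (helpers_test.go:261 above).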

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (373.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-286000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-amd64 stop -p ha-286000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-amd64 stop -p ha-286000 -v=7 --alsologtostderr: (33.093512398s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-286000 --wait=true -v=7 --alsologtostderr
E0816 10:25:35.636518    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/functional-373000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:26:32.723068    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/addons-725000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ha-286000 --wait=true -v=7 --alsologtostderr: exit status 80 (5m35.260661487s)

                                                
                                                
-- stdout --
	* [ha-286000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "ha-286000" primary control-plane node in "ha-286000" cluster
	* Restarting existing hyperkit VM for "ha-286000" ...
	* Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	* Enabled addons: 
	
	* Starting "ha-286000-m02" control-plane node in "ha-286000" cluster
	* Restarting existing hyperkit VM for "ha-286000-m02" ...
	* Found network options:
	  - NO_PROXY=192.169.0.5
	* Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	  - env NO_PROXY=192.169.0.5
	* Verifying Kubernetes components...
	
	* Starting "ha-286000-m03" control-plane node in "ha-286000" cluster
	* Restarting existing hyperkit VM for "ha-286000-m03" ...
	* Found network options:
	  - NO_PROXY=192.169.0.5,192.169.0.6
	* Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	  - env NO_PROXY=192.169.0.5
	  - env NO_PROXY=192.169.0.5,192.169.0.6
	* Verifying Kubernetes components...
	
	

-- /stdout --
** stderr ** 
	I0816 10:23:31.430615    4656 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:23:31.431053    4656 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:23:31.431060    4656 out.go:358] Setting ErrFile to fd 2...
	I0816 10:23:31.431065    4656 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:23:31.431301    4656 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1276/.minikube/bin
	I0816 10:23:31.432961    4656 out.go:352] Setting JSON to false
	I0816 10:23:31.457337    4656 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3181,"bootTime":1723825830,"procs":437,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0816 10:23:31.457435    4656 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:23:31.479716    4656 out.go:177] * [ha-286000] minikube v1.33.1 on Darwin 14.6.1
	I0816 10:23:31.522521    4656 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:23:31.522577    4656 notify.go:220] Checking for updates...
	I0816 10:23:31.567096    4656 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:23:31.588384    4656 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0816 10:23:31.609442    4656 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:23:31.630204    4656 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:23:31.651227    4656 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:23:31.673167    4656 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:23:31.673335    4656 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:23:31.674026    4656 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:23:31.674118    4656 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:23:31.683709    4656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52161
	I0816 10:23:31.684063    4656 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:23:31.684452    4656 main.go:141] libmachine: Using API Version  1
	I0816 10:23:31.684463    4656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:23:31.684744    4656 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:23:31.684873    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:23:31.714156    4656 out.go:177] * Using the hyperkit driver based on existing profile
	I0816 10:23:31.756393    4656 start.go:297] selected driver: hyperkit
	I0816 10:23:31.756421    4656 start.go:901] validating driver "hyperkit" against &{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
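
The line above is a single dump of the profile's cluster config; the part that matters for this HA test is the Nodes list and APIServerHAVIP. A trimmed, hypothetical reduction of that shape (field names copied from the dump; this is an illustration, not minikube's actual type):

package main

import "fmt"

// Node mirrors the per-node entries in the config dump above.
type Node struct {
	Name         string
	IP           string
	Port         int
	ControlPlane bool
	Worker       bool
}

func main() {
	// Topology from the dump: three control-plane nodes plus one worker,
	// fronted by the HA virtual IP 192.169.0.254 on port 8443.
	nodes := []Node{
		{"", "192.169.0.5", 8443, true, true},
		{"m02", "192.169.0.6", 8443, true, true},
		{"m03", "192.169.0.7", 8443, true, true},
		{"m04", "192.169.0.8", 0, false, true},
	}
	for _, n := range nodes {
		fmt.Printf("%-4s %-12s control-plane=%v\n", n.Name, n.IP, n.ControlPlane)
	}
}
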
	I0816 10:23:31.756672    4656 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:23:31.756879    4656 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:23:31.757097    4656 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19461-1276/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0816 10:23:31.766849    4656 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0816 10:23:31.772699    4656 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:23:31.772722    4656 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0816 10:23:31.776315    4656 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:23:31.776385    4656 cni.go:84] Creating CNI manager for ""
	I0816 10:23:31.776395    4656 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0816 10:23:31.776475    4656 start.go:340] cluster config:
	{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:23:31.776573    4656 iso.go:125] acquiring lock: {Name:mkb430b43b5e7a8a683454482519837ed996c78d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:23:31.798308    4656 out.go:177] * Starting "ha-286000" primary control-plane node in "ha-286000" cluster
	I0816 10:23:31.820262    4656 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:23:31.820333    4656 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0816 10:23:31.820361    4656 cache.go:56] Caching tarball of preloaded images
	I0816 10:23:31.820552    4656 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 10:23:31.820569    4656 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:23:31.820757    4656 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:23:31.821672    4656 start.go:360] acquireMachinesLock for ha-286000: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:23:31.821789    4656 start.go:364] duration metric: took 93.411µs to acquireMachinesLock for "ha-286000"
	I0816 10:23:31.821826    4656 start.go:96] Skipping create...Using existing machine configuration
	I0816 10:23:31.821843    4656 fix.go:54] fixHost starting: 
	I0816 10:23:31.822296    4656 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:23:31.822326    4656 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:23:31.831598    4656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52163
	I0816 10:23:31.831979    4656 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:23:31.832360    4656 main.go:141] libmachine: Using API Version  1
	I0816 10:23:31.832373    4656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:23:31.832622    4656 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:23:31.832766    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:23:31.832876    4656 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:23:31.832983    4656 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:23:31.833087    4656 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:23:31.834009    4656 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid 3771 missing from process table
	I0816 10:23:31.834044    4656 fix.go:112] recreateIfNeeded on ha-286000: state=Stopped err=<nil>
	I0816 10:23:31.834061    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	W0816 10:23:31.834156    4656 fix.go:138] unexpected machine state, will restart: <nil>
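
"hyperkit pid 3771 missing from process table" above is the driver's liveness test: the pid recorded in hyperkit.pid no longer exists, so the machine is treated as Stopped and restarted. The conventional Unix probe for this is signal 0; a minimal sketch:

package main

import (
	"fmt"
	"os"
	"syscall"
)

// alive reports whether a process with the given pid exists. Sending
// signal 0 performs the existence check without delivering a signal
// (note: EPERM would also mean the process exists; ignored here).
func alive(pid int) bool {
	p, err := os.FindProcess(pid) // always succeeds on Unix
	if err != nil {
		return false
	}
	return p.Signal(syscall.Signal(0)) == nil
}

func main() {
	fmt.Println(alive(3771)) // false once the VM process is gone
}
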
	I0816 10:23:31.892140    4656 out.go:177] * Restarting existing hyperkit VM for "ha-286000" ...
	I0816 10:23:31.931475    4656 main.go:141] libmachine: (ha-286000) Calling .Start
	I0816 10:23:31.931796    4656 main.go:141] libmachine: (ha-286000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/hyperkit.pid
	I0816 10:23:31.931814    4656 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:23:31.933360    4656 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid 3771 missing from process table
	I0816 10:23:31.933379    4656 main.go:141] libmachine: (ha-286000) DBG | pid 3771 is in state "Stopped"
	I0816 10:23:31.933400    4656 main.go:141] libmachine: (ha-286000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/hyperkit.pid...
	I0816 10:23:31.934010    4656 main.go:141] libmachine: (ha-286000) DBG | Using UUID ad96de67-e238-408c-89eb-d74e5b68d297
	I0816 10:23:32.043909    4656 main.go:141] libmachine: (ha-286000) DBG | Generated MAC 66:c8:48:4e:12:1b
	I0816 10:23:32.043928    4656 main.go:141] libmachine: (ha-286000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000
	I0816 10:23:32.044052    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ad96de67-e238-408c-89eb-d74e5b68d297", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a89c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:23:32.044084    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ad96de67-e238-408c-89eb-d74e5b68d297", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a89c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:23:32.044134    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "ad96de67-e238-408c-89eb-d74e5b68d297", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/ha-286000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"}
	I0816 10:23:32.044180    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U ad96de67-e238-408c-89eb-d74e5b68d297 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/ha-286000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"
	I0816 10:23:32.044192    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 10:23:32.045646    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 DEBUG: hyperkit: Pid is 4669
	I0816 10:23:32.046030    4656 main.go:141] libmachine: (ha-286000) DBG | Attempt 0
	I0816 10:23:32.046046    4656 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:23:32.046146    4656 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 4669
	I0816 10:23:32.048140    4656 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:23:32.048193    4656 main.go:141] libmachine: (ha-286000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0816 10:23:32.048231    4656 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dbd7}
	I0816 10:23:32.048249    4656 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:23:32.048272    4656 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66c0d7f9}
	I0816 10:23:32.048286    4656 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:23:32.048293    4656 main.go:141] libmachine: (ha-286000) DBG | Found match: 66:c8:48:4e:12:1b
	I0816 10:23:32.048301    4656 main.go:141] libmachine: (ha-286000) DBG | IP: 192.169.0.5
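
With no guest agent, the driver discovers the VM's address by scanning macOS's DHCP lease database for the MAC it generated, as the search above shows. A standalone version of that lookup (assuming the raw key=value block format of /var/db/dhcpd_leases, which differs from the parsed entries printed above; parsing is simplified):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// ipForMAC scans the lease file for an hw_address line ending in mac
// and returns the ip_address recorded in the same lease block.
func ipForMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac):
			return ip, nil // lease blocks list ip_address before hw_address
		}
	}
	return "", fmt.Errorf("%s not found in %s", mac, path)
}

func main() {
	ip, err := ipForMAC("/var/db/dhcpd_leases", "66:c8:48:4e:12:1b")
	fmt.Println(ip, err)
}
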
	I0816 10:23:32.048382    4656 main.go:141] libmachine: (ha-286000) Calling .GetConfigRaw
	I0816 10:23:32.049597    4656 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:23:32.049816    4656 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:23:32.050246    4656 machine.go:93] provisionDockerMachine start ...
	I0816 10:23:32.050258    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:23:32.050395    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:32.050512    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:32.050602    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:32.050694    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:32.050788    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:32.050933    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:23:32.051148    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:23:32.051157    4656 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 10:23:32.053822    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 10:23:32.105618    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 10:23:32.106644    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:23:32.106664    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:23:32.106672    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:23:32.106681    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:23:32.488273    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 10:23:32.488286    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 10:23:32.602925    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:23:32.602945    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:23:32.602968    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:23:32.603003    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:23:32.603842    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 10:23:32.603853    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 10:23:38.196809    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:38 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 10:23:38.196887    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:38 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 10:23:38.196898    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:38 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 10:23:38.223115    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:38 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 10:23:43.125906    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 10:23:43.125920    4656 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:23:43.126080    4656 buildroot.go:166] provisioning hostname "ha-286000"
	I0816 10:23:43.126090    4656 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:23:43.126193    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:43.126289    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:43.126427    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:43.126532    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:43.126633    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:43.126763    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:23:43.126897    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:23:43.126905    4656 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-286000 && echo "ha-286000" | sudo tee /etc/hostname
	I0816 10:23:43.200672    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-286000
	
	I0816 10:23:43.200691    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:43.200824    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:43.200934    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:43.201035    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:43.201146    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:43.201266    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:23:43.201423    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:23:43.201434    4656 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-286000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-286000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-286000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 10:23:43.272382    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 10:23:43.272403    4656 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19461-1276/.minikube CaCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19461-1276/.minikube}
	I0816 10:23:43.272418    4656 buildroot.go:174] setting up certificates
	I0816 10:23:43.272432    4656 provision.go:84] configureAuth start
	I0816 10:23:43.272440    4656 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:23:43.272576    4656 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:23:43.272680    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:43.272769    4656 provision.go:143] copyHostCerts
	I0816 10:23:43.272801    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:23:43.272890    4656 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem, removing ...
	I0816 10:23:43.272898    4656 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:23:43.273149    4656 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem (1078 bytes)
	I0816 10:23:43.273406    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:23:43.273447    4656 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem, removing ...
	I0816 10:23:43.273452    4656 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:23:43.273542    4656 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem (1123 bytes)
	I0816 10:23:43.273700    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:23:43.273746    4656 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem, removing ...
	I0816 10:23:43.273751    4656 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:23:43.273833    4656 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem (1679 bytes)
	I0816 10:23:43.274002    4656 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem org=jenkins.ha-286000 san=[127.0.0.1 192.169.0.5 ha-286000 localhost minikube]
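
Provisioning then generates a Docker TLS server certificate whose SANs cover every name the daemon may be reached by, per the san=[...] list above. A compact sketch of that step with the standard library (self-signed for brevity, where the real flow signs against minikube's ca.pem/ca-key.pem):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-286000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the provision log: IPs and hostnames the server answers to.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.5")},
		DNSNames:    []string{"ha-286000", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
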
	I0816 10:23:43.350973    4656 provision.go:177] copyRemoteCerts
	I0816 10:23:43.351030    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 10:23:43.351047    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:43.351198    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:43.351290    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:43.351418    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:43.351516    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:23:43.390290    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 10:23:43.390367    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 10:23:43.409250    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 10:23:43.409310    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0816 10:23:43.428428    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 10:23:43.428486    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 10:23:43.447295    4656 provision.go:87] duration metric: took 174.931658ms to configureAuth
	I0816 10:23:43.447308    4656 buildroot.go:189] setting minikube options for container-runtime
	I0816 10:23:43.447492    4656 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:23:43.447506    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:23:43.447636    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:43.447734    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:43.447819    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:43.447898    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:43.447976    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:43.448093    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:23:43.448217    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:23:43.448225    4656 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 10:23:43.510056    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 10:23:43.510072    4656 buildroot.go:70] root file system type: tmpfs
	I0816 10:23:43.510138    4656 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 10:23:43.510152    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:43.510280    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:43.510367    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:43.510466    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:43.510546    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:43.510704    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:23:43.510847    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:23:43.510894    4656 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 10:23:43.585463    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 10:23:43.585485    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:43.585612    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:43.585708    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:43.585797    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:43.585881    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:43.585994    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:23:43.586142    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:23:43.586155    4656 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 10:23:45.281245    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0816 10:23:45.281272    4656 machine.go:96] duration metric: took 13.233954511s to provisionDockerMachine
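
The `diff -u ... || { mv ...; systemctl ...; }` one-liner above is an idempotence guard: the candidate unit is written to docker.service.new and only swapped in, with daemon-reload/enable/restart, when it differs from the installed file, so an unchanged config never restarts Docker. Here the old file does not exist yet, so the guard fires. The same guard as a sketch (paths and systemctl steps as in the log; assumes it runs as root, standing in for the provisioner's sudo):

package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func main() {
	const unit = "/lib/systemd/system/docker.service"
	oldData, _ := os.ReadFile(unit) // missing file reads as empty
	newData, err := os.ReadFile(unit + ".new")
	if err != nil {
		log.Fatal(err)
	}
	if bytes.Equal(oldData, newData) {
		return // nothing changed: skip the restart entirely
	}
	if err := os.Rename(unit+".new", unit); err != nil {
		log.Fatal(err)
	}
	for _, args := range [][]string{
		{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			log.Fatalf("systemctl %v: %v\n%s", args, err, out)
		}
	}
}
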
	I0816 10:23:45.281282    4656 start.go:293] postStartSetup for "ha-286000" (driver="hyperkit")
	I0816 10:23:45.281290    4656 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 10:23:45.281301    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:23:45.281477    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 10:23:45.281497    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:45.281579    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:45.281672    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:45.281756    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:45.281830    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:23:45.322349    4656 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 10:23:45.325873    4656 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 10:23:45.325888    4656 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/addons for local assets ...
	I0816 10:23:45.326003    4656 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/files for local assets ...
	I0816 10:23:45.326184    4656 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> 18312.pem in /etc/ssl/certs
	I0816 10:23:45.326190    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /etc/ssl/certs/18312.pem
	I0816 10:23:45.326400    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 10:23:45.335377    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:23:45.364973    4656 start.go:296] duration metric: took 83.714414ms for postStartSetup
	I0816 10:23:45.365002    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:23:45.365179    4656 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0816 10:23:45.365192    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:45.365284    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:45.365363    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:45.365463    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:45.365567    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:23:45.403540    4656 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0816 10:23:45.403604    4656 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0816 10:23:45.456725    4656 fix.go:56] duration metric: took 13.637911557s for fixHost
	I0816 10:23:45.456746    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:45.456881    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:45.456970    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:45.457077    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:45.457170    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:45.457308    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:23:45.457449    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:23:45.457456    4656 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 10:23:45.520497    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723829025.657632114
	
	I0816 10:23:45.520510    4656 fix.go:216] guest clock: 1723829025.657632114
	I0816 10:23:45.520516    4656 fix.go:229] Guest: 2024-08-16 10:23:45.657632114 -0700 PDT Remote: 2024-08-16 10:23:45.456737 -0700 PDT m=+14.070866227 (delta=200.895114ms)
	I0816 10:23:45.520533    4656 fix.go:200] guest clock delta is within tolerance: 200.895114ms
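
The guest clock check runs `date +%s.%N` over SSH and compares the result with the host's wall clock; the ~200ms delta here is inside tolerance, so no resync is forced. A local sketch of the measurement (SSH transport omitted; a real check must also discount command round-trip time, and %N needs GNU date as on the Linux guest):

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Stand-in for the guest side: run `date +%s.%N` and parse the epoch.
	out, err := exec.Command("date", "+%s.%N").Output()
	if err != nil {
		panic(err)
	}
	guest, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		panic(err)
	}
	host := float64(time.Now().UnixNano()) / 1e9
	delta := time.Duration((guest - host) * float64(time.Second))
	fmt.Printf("guest clock delta: %v\n", delta)
}
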
	I0816 10:23:45.520536    4656 start.go:83] releasing machines lock for "ha-286000", held for 13.701786252s
	I0816 10:23:45.520558    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:23:45.520685    4656 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:23:45.520780    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:23:45.521071    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:23:45.521183    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:23:45.521258    4656 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 10:23:45.521295    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:45.521314    4656 ssh_runner.go:195] Run: cat /version.json
	I0816 10:23:45.521325    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:45.521385    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:45.521413    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:45.521478    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:45.521492    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:45.521569    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:45.521588    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:45.521684    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:23:45.521698    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:23:45.608738    4656 ssh_runner.go:195] Run: systemctl --version
	I0816 10:23:45.613819    4656 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 10:23:45.618009    4656 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 10:23:45.618054    4656 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 10:23:45.630928    4656 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 10:23:45.630940    4656 start.go:495] detecting cgroup driver to use...
	I0816 10:23:45.631050    4656 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:23:45.647297    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 10:23:45.656185    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 10:23:45.664870    4656 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 10:23:45.664909    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 10:23:45.673735    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:23:45.682541    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 10:23:45.691093    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:23:45.699692    4656 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 10:23:45.708389    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 10:23:45.717214    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 10:23:45.726031    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 10:23:45.734772    4656 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 10:23:45.742525    4656 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 10:23:45.750474    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:23:45.857037    4656 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0816 10:23:45.876038    4656 start.go:495] detecting cgroup driver to use...
	I0816 10:23:45.876115    4656 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 10:23:45.891371    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:23:45.904769    4656 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 10:23:45.925222    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:23:45.935653    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:23:45.946111    4656 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 10:23:45.966114    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:23:45.976753    4656 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:23:45.991951    4656 ssh_runner.go:195] Run: which cri-dockerd
	I0816 10:23:45.995087    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 10:23:46.002262    4656 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 10:23:46.015662    4656 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 10:23:46.113010    4656 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 10:23:46.220102    4656 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 10:23:46.220181    4656 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 10:23:46.234448    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:23:46.327392    4656 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:23:48.670555    4656 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.343962753s)
	I0816 10:23:48.670612    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0816 10:23:48.681270    4656 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0816 10:23:48.694180    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:23:48.704525    4656 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0816 10:23:48.796386    4656 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0816 10:23:48.896301    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:23:49.015732    4656 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0816 10:23:49.029308    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:23:49.039437    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:23:49.133284    4656 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0816 10:23:49.196413    4656 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0816 10:23:49.196492    4656 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0816 10:23:49.200987    4656 start.go:563] Will wait 60s for crictl version
	I0816 10:23:49.201034    4656 ssh_runner.go:195] Run: which crictl
	I0816 10:23:49.204272    4656 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 10:23:49.229772    4656 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0816 10:23:49.229851    4656 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:23:49.247799    4656 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:23:49.310834    4656 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0816 10:23:49.310884    4656 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:23:49.311324    4656 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0816 10:23:49.315940    4656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
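
host.minikube.internal is pinned by rewriting /etc/hosts through a temp file: filter out any stale mapping, append the gateway entry, and copy the result back. A rough Go equivalent of that bash one-liner (same name and IP as the log; written to a scratch path here instead of copying over /etc/hosts with sudo):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const entry = "192.169.0.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any previous host.minikube.internal mapping, as the
		// `grep -v` in the log does, then re-append the current one.
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	out := strings.Join(kept, "\n") + "\n"
	if err := os.WriteFile("/tmp/hosts.new", []byte(out), 0644); err != nil {
		panic(err)
	}
	fmt.Print(out)
}
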
	I0816 10:23:49.325830    4656 kubeadm.go:883] updating cluster {Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 10:23:49.325921    4656 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:23:49.325979    4656 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 10:23:49.344604    4656 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0816 10:23:49.344616    4656 docker.go:615] Images already preloaded, skipping extraction
	I0816 10:23:49.344689    4656 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 10:23:49.358019    4656 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0816 10:23:49.358039    4656 cache_images.go:84] Images are preloaded, skipping loading
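
The "Images already preloaded, skipping extraction" decision above comes from listing `docker images` inside the guest and confirming every expected preload image is present. A hedged sketch of that set check (expected list abbreviated):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // missingImages lists docker images and returns the expected images that
    // are not present — the check behind "Images already preloaded".
    func missingImages(expected []string) ([]string, error) {
    	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	have := map[string]bool{}
    	for _, img := range strings.Fields(string(out)) {
    		have[img] = true
    	}
    	var missing []string
    	for _, img := range expected {
    		if !have[img] {
    			missing = append(missing, img)
    		}
    	}
    	return missing, nil
    }

    func main() {
    	missing, err := missingImages([]string{
    		"registry.k8s.io/kube-apiserver:v1.31.0",
    		"registry.k8s.io/etcd:3.5.15-0",
    	})
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("missing:", missing)
    }
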
	I0816 10:23:49.358049    4656 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.0 docker true true} ...
	I0816 10:23:49.358133    4656 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-286000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 10:23:49.358200    4656 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0816 10:23:49.396733    4656 cni.go:84] Creating CNI manager for ""
	I0816 10:23:49.396746    4656 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0816 10:23:49.396758    4656 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 10:23:49.396773    4656 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-286000 NodeName:ha-286000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 10:23:49.396858    4656 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-286000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
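minikube produces the manifest above by filling a template from the kubeadm options struct printed at kubeadm.go:181. The general shape of that render step, sketched with a trimmed-down struct (field names here are illustrative, not minikube's):

    package main

    import (
    	"os"
    	"text/template"
    )

    // A trimmed-down stand-in for the kubeadm options printed above.
    type kubeadmParams struct {
    	AdvertiseAddress string
    	BindPort         int
    	NodeName         string
    	PodSubnet        string
    }

    const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
    ---
    networking:
      podSubnet: "{{.PodSubnet}}"
    `

    func main() {
    	p := kubeadmParams{"192.169.0.5", 8443, "ha-286000", "10.244.0.0/16"}
    	t := template.Must(template.New("kubeadm").Parse(initTmpl))
    	// Writes the rendered YAML, as scp'd to /var/tmp/minikube/kubeadm.yaml.new.
    	if err := t.Execute(os.Stdout, p); err != nil {
    		panic(err)
    	}
    }
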
	I0816 10:23:49.396876    4656 kube-vip.go:115] generating kube-vip config ...
	I0816 10:23:49.396930    4656 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 10:23:49.409760    4656 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 10:23:49.409827    4656 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
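
kube-vip's control-plane load balancing (lb_enable above) depends on the IPVS modules probed with modprobe a few lines earlier, and its leader election over the plndr-cp-lock lease uses the 5s/3s/1s lease, renew, and retry settings in the env list. A rough way to confirm the modules are loaded from inside the guest — noting that modules built into the kernel will not appear in /proc/modules:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // ipvsLoaded reports whether the IPVS modules kube-vip relies on
    // (modprobe'd before the config above was generated) show up in
    // /proc/modules.
    func ipvsLoaded() (bool, error) {
    	f, err := os.Open("/proc/modules")
    	if err != nil {
    		return false, err
    	}
    	defer f.Close()
    	want := map[string]bool{"ip_vs": false, "nf_conntrack": false}
    	s := bufio.NewScanner(f)
    	for s.Scan() {
    		fields := strings.Fields(s.Text())
    		if len(fields) == 0 {
    			continue
    		}
    		if _, ok := want[fields[0]]; ok {
    			want[fields[0]] = true
    		}
    	}
    	for _, seen := range want {
    		if !seen {
    			return false, nil
    		}
    	}
    	return true, s.Err()
    }

    func main() {
    	ok, err := ipvsLoaded()
    	fmt.Println(ok, err)
    }
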
	I0816 10:23:49.409880    4656 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 10:23:49.417741    4656 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 10:23:49.417784    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0816 10:23:49.425178    4656 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0816 10:23:49.438709    4656 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 10:23:49.451834    4656 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0816 10:23:49.465615    4656 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0816 10:23:49.478992    4656 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0816 10:23:49.481872    4656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:23:49.491581    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:23:49.591270    4656 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:23:49.605166    4656 certs.go:68] Setting up /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000 for IP: 192.169.0.5
	I0816 10:23:49.605178    4656 certs.go:194] generating shared ca certs ...
	I0816 10:23:49.605204    4656 certs.go:226] acquiring lock for ca certs: {Name:mka549fbc467849834d2267da204028c49d88a3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:23:49.605373    4656 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key
	I0816 10:23:49.605447    4656 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key
	I0816 10:23:49.605458    4656 certs.go:256] generating profile certs ...
	I0816 10:23:49.605548    4656 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key
	I0816 10:23:49.605569    4656 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.1abcce66
	I0816 10:23:49.605590    4656 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.1abcce66 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I0816 10:23:49.872724    4656 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.1abcce66 ...
	I0816 10:23:49.872746    4656 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.1abcce66: {Name:mk52a3c288948ed76c5e0c3d52d6b4bf6d85dac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:23:49.873234    4656 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.1abcce66 ...
	I0816 10:23:49.873246    4656 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.1abcce66: {Name:mk4d6d8f8e53e86a8e5b1aff2a47e28c9af375aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:23:49.873462    4656 certs.go:381] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.1abcce66 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt
	I0816 10:23:49.873670    4656 certs.go:385] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.1abcce66 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key
	I0816 10:23:49.873917    4656 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key
	I0816 10:23:49.873927    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 10:23:49.873950    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 10:23:49.873969    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 10:23:49.873988    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 10:23:49.874005    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 10:23:49.874022    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 10:23:49.874039    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 10:23:49.874056    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 10:23:49.874155    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem (1338 bytes)
	W0816 10:23:49.874204    4656 certs.go:480] ignoring /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831_empty.pem, impossibly tiny 0 bytes
	I0816 10:23:49.874213    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 10:23:49.874243    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem (1078 bytes)
	I0816 10:23:49.874272    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem (1123 bytes)
	I0816 10:23:49.874303    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem (1679 bytes)
	I0816 10:23:49.874365    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:23:49.874404    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem -> /usr/share/ca-certificates/1831.pem
	I0816 10:23:49.874426    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /usr/share/ca-certificates/18312.pem
	I0816 10:23:49.874445    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:23:49.874951    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 10:23:49.894591    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 10:23:49.949362    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 10:23:50.001129    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 10:23:50.031447    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0816 10:23:50.051861    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 10:23:50.072126    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 10:23:50.092020    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 10:23:50.111735    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem --> /usr/share/ca-certificates/1831.pem (1338 bytes)
	I0816 10:23:50.131448    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /usr/share/ca-certificates/18312.pem (1708 bytes)
	I0816 10:23:50.150204    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 10:23:50.170431    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 10:23:50.183792    4656 ssh_runner.go:195] Run: openssl version
	I0816 10:23:50.188069    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18312.pem && ln -fs /usr/share/ca-certificates/18312.pem /etc/ssl/certs/18312.pem"
	I0816 10:23:50.196462    4656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18312.pem
	I0816 10:23:50.199930    4656 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 16:57 /usr/share/ca-certificates/18312.pem
	I0816 10:23:50.199966    4656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18312.pem
	I0816 10:23:50.204340    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18312.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 10:23:50.212595    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 10:23:50.220934    4656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:23:50.224472    4656 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:48 /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:23:50.224507    4656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:23:50.228762    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 10:23:50.237224    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1831.pem && ln -fs /usr/share/ca-certificates/1831.pem /etc/ssl/certs/1831.pem"
	I0816 10:23:50.245558    4656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1831.pem
	I0816 10:23:50.249052    4656 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 16:57 /usr/share/ca-certificates/1831.pem
	I0816 10:23:50.249090    4656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1831.pem
	I0816 10:23:50.253505    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1831.pem /etc/ssl/certs/51391683.0"
	I0816 10:23:50.261784    4656 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 10:23:50.265339    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 10:23:50.269761    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 10:23:50.273967    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 10:23:50.278404    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 10:23:50.282734    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 10:23:50.286959    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
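
Each `openssl x509 ... -checkend 86400` call above exits non-zero if the certificate expires within 24 hours, which is minikube's cue to regenerate it. The equivalent test in Go with crypto/x509, as a sketch:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // within d — the same question `openssl x509 -checkend` answers.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
    	fmt.Println(soon, err)
    }
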
	I0816 10:23:50.291328    4656 kubeadm.go:392] StartCluster: {Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:23:50.291439    4656 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0816 10:23:50.308917    4656 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 10:23:50.316477    4656 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 10:23:50.316487    4656 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 10:23:50.316521    4656 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 10:23:50.324768    4656 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:23:50.325077    4656 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-286000" does not appear in /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:23:50.325160    4656 kubeconfig.go:62] /Users/jenkins/minikube-integration/19461-1276/kubeconfig needs updating (will repair): [kubeconfig missing "ha-286000" cluster setting kubeconfig missing "ha-286000" context setting]
	I0816 10:23:50.325346    4656 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/kubeconfig: {Name:mk7f4491c8ec5cf96c96a5a1a5dbfba4301394a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:23:50.325844    4656 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:23:50.326042    4656 kapi.go:59] client config for ha-286000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key", CAFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x5540f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 10:23:50.326340    4656 cert_rotation.go:140] Starting client certificate rotation controller
	I0816 10:23:50.326539    4656 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 10:23:50.333744    4656 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
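
kubeadm.go:630 reaches "does not require reconfiguration" by diffing the staged /var/tmp/minikube/kubeadm.yaml.new against the live copy and skipping the control-plane restart when they match. Roughly, in Go:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    // needsReconfig compares the live kubeadm config with the freshly staged
    // .new copy — the check behind `sudo diff -u kubeadm.yaml kubeadm.yaml.new`.
    func needsReconfig(live, staged string) (bool, error) {
    	a, err := os.ReadFile(live)
    	if err != nil {
    		return true, err // no live copy: reconfigure
    	}
    	b, err := os.ReadFile(staged)
    	if err != nil {
    		return false, err
    	}
    	return !bytes.Equal(a, b), nil
    }

    func main() {
    	changed, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	fmt.Println(changed, err)
    }
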
	I0816 10:23:50.333758    4656 kubeadm.go:597] duration metric: took 17.27164ms to restartPrimaryControlPlane
	I0816 10:23:50.333763    4656 kubeadm.go:394] duration metric: took 42.452811ms to StartCluster
	I0816 10:23:50.333775    4656 settings.go:142] acquiring lock: {Name:mk60e377244eac756871b74b6bd45115654652a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:23:50.333847    4656 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:23:50.334196    4656 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/kubeconfig: {Name:mk7f4491c8ec5cf96c96a5a1a5dbfba4301394a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:23:50.334417    4656 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:23:50.334430    4656 start.go:241] waiting for startup goroutines ...
	I0816 10:23:50.334436    4656 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 10:23:50.334546    4656 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:23:50.378007    4656 out.go:177] * Enabled addons: 
	I0816 10:23:50.399051    4656 addons.go:510] duration metric: took 64.628768ms for enable addons: enabled=[]
	I0816 10:23:50.399122    4656 start.go:246] waiting for cluster config update ...
	I0816 10:23:50.399134    4656 start.go:255] writing updated cluster config ...
	I0816 10:23:50.421150    4656 out.go:201] 
	I0816 10:23:50.443594    4656 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:23:50.443722    4656 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:23:50.466091    4656 out.go:177] * Starting "ha-286000-m02" control-plane node in "ha-286000" cluster
	I0816 10:23:50.507896    4656 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:23:50.507978    4656 cache.go:56] Caching tarball of preloaded images
	I0816 10:23:50.508166    4656 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 10:23:50.508183    4656 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:23:50.508305    4656 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:23:50.509238    4656 start.go:360] acquireMachinesLock for ha-286000-m02: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:23:50.509340    4656 start.go:364] duration metric: took 77.349µs to acquireMachinesLock for "ha-286000-m02"
	I0816 10:23:50.509364    4656 start.go:96] Skipping create...Using existing machine configuration
	I0816 10:23:50.509373    4656 fix.go:54] fixHost starting: m02
	I0816 10:23:50.509785    4656 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:23:50.509813    4656 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:23:50.519278    4656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52185
	I0816 10:23:50.519808    4656 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:23:50.520224    4656 main.go:141] libmachine: Using API Version  1
	I0816 10:23:50.520241    4656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:23:50.520527    4656 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:23:50.520742    4656 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:23:50.520847    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetState
	I0816 10:23:50.520930    4656 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:23:50.521027    4656 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 4408
	I0816 10:23:50.521973    4656 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid 4408 missing from process table
	I0816 10:23:50.522001    4656 fix.go:112] recreateIfNeeded on ha-286000-m02: state=Stopped err=<nil>
	I0816 10:23:50.522008    4656 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	W0816 10:23:50.522113    4656 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 10:23:50.564905    4656 out.go:177] * Restarting existing hyperkit VM for "ha-286000-m02" ...
	I0816 10:23:50.585936    4656 main.go:141] libmachine: (ha-286000-m02) Calling .Start
	I0816 10:23:50.586207    4656 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:23:50.586317    4656 main.go:141] libmachine: (ha-286000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/hyperkit.pid
	I0816 10:23:50.588008    4656 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid 4408 missing from process table
	I0816 10:23:50.588025    4656 main.go:141] libmachine: (ha-286000-m02) DBG | pid 4408 is in state "Stopped"
	I0816 10:23:50.588043    4656 main.go:141] libmachine: (ha-286000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/hyperkit.pid...
	I0816 10:23:50.588412    4656 main.go:141] libmachine: (ha-286000-m02) DBG | Using UUID f7630301-2bc3-4935-a751-ac72999da031
	I0816 10:23:50.615912    4656 main.go:141] libmachine: (ha-286000-m02) DBG | Generated MAC 72:69:8f:11:68:1d
	I0816 10:23:50.615934    4656 main.go:141] libmachine: (ha-286000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000
	I0816 10:23:50.616061    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f7630301-2bc3-4935-a751-ac72999da031", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aca20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:23:50.616091    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f7630301-2bc3-4935-a751-ac72999da031", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aca20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:23:50.616153    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "f7630301-2bc3-4935-a751-ac72999da031", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/ha-286000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"}
	I0816 10:23:50.616186    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U f7630301-2bc3-4935-a751-ac72999da031 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/ha-286000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"
	I0816 10:23:50.616197    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 10:23:50.617617    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 DEBUG: hyperkit: Pid is 4678
	I0816 10:23:50.618129    4656 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 0
	I0816 10:23:50.618145    4656 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:23:50.618226    4656 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 4678
	I0816 10:23:50.620253    4656 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:23:50.620318    4656 main.go:141] libmachine: (ha-286000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0816 10:23:50.620334    4656 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:23:50.620349    4656 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dbd7}
	I0816 10:23:50.620388    4656 main.go:141] libmachine: (ha-286000-m02) DBG | Found match: 72:69:8f:11:68:1d
	I0816 10:23:50.620402    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetConfigRaw
	I0816 10:23:50.620404    4656 main.go:141] libmachine: (ha-286000-m02) DBG | IP: 192.169.0.6
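
The IP discovery above works by scanning macOS's /var/db/dhcpd_leases for the MAC address hyperkit generated for the VM. A rough sketch of that scan (field names follow the macOS vmnet lease format, in which ip_address precedes hw_address within an entry):

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // ipForMAC scans the vmnet lease file for an entry whose hw_address ends
    // with mac and returns its ip_address, as in the "Searching for ... in
    // /var/db/dhcpd_leases" step above.
    func ipForMAC(leaseFile, mac string) (string, error) {
    	f, err := os.Open(leaseFile)
    	if err != nil {
    		return "", err
    	}
    	defer f.Close()
    	var ip string
    	s := bufio.NewScanner(f)
    	for s.Scan() {
    		line := strings.TrimSpace(s.Text())
    		switch {
    		case strings.HasPrefix(line, "ip_address="):
    			ip = strings.TrimPrefix(line, "ip_address=")
    		case strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac):
    			return ip, nil // ip_address was seen earlier in this entry
    		}
    	}
    	return "", fmt.Errorf("no lease for %s", mac)
    }

    func main() {
    	fmt.Println(ipForMAC("/var/db/dhcpd_leases", "72:69:8f:11:68:1d"))
    }
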
	I0816 10:23:50.621061    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:23:50.621271    4656 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:23:50.621639    4656 machine.go:93] provisionDockerMachine start ...
	I0816 10:23:50.621648    4656 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:23:50.621787    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:23:50.621898    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:23:50.622018    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:23:50.622130    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:23:50.622215    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:23:50.622373    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:23:50.622508    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:23:50.622515    4656 main.go:141] libmachine: About to run SSH command:
	hostname
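
The "native" SSH client above is an in-process Go client rather than the system ssh binary. A self-contained sketch of the dial-and-run pattern using golang.org/x/crypto/ssh (minikube's real wiring goes through its machine driver, so the paths and options here are illustrative):

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // runOverSSH dials the node with a private key and runs one command,
    // as the provisioner does for `hostname` above.
    func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return "", err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return "", err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VMs only
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput(cmd)
    	return string(out), err
    }

    func main() {
    	out, err := runOverSSH("192.169.0.6:22", "docker",
    		"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa", "hostname")
    	fmt.Println(out, err)
    }
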
	I0816 10:23:50.625610    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 10:23:50.635240    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 10:23:50.636222    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:23:50.636239    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:23:50.636256    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:23:50.636268    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:23:51.016978    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:51 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 10:23:51.016996    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:51 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 10:23:51.131867    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:23:51.131882    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:23:51.131905    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:23:51.131915    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:23:51.132722    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:51 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 10:23:51.132732    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:51 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 10:23:56.691144    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:56 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 10:23:56.691211    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:56 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 10:23:56.691221    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:56 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 10:23:56.715157    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:56 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 10:24:01.691628    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 10:24:01.691659    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:24:01.691824    4656 buildroot.go:166] provisioning hostname "ha-286000-m02"
	I0816 10:24:01.691835    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:24:01.691933    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:24:01.692024    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:24:01.692118    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:01.692216    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:01.692322    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:24:01.692468    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:01.692634    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:24:01.692662    4656 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-286000-m02 && echo "ha-286000-m02" | sudo tee /etc/hostname
	I0816 10:24:01.771215    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-286000-m02
	
	I0816 10:24:01.771228    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:24:01.771358    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:24:01.771450    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:01.771545    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:01.771647    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:24:01.771778    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:01.771942    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:24:01.771954    4656 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-286000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-286000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-286000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 10:24:01.843105    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 10:24:01.843122    4656 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19461-1276/.minikube CaCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19461-1276/.minikube}
	I0816 10:24:01.843132    4656 buildroot.go:174] setting up certificates
	I0816 10:24:01.843138    4656 provision.go:84] configureAuth start
	I0816 10:24:01.843144    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:24:01.843278    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:24:01.843379    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:24:01.843473    4656 provision.go:143] copyHostCerts
	I0816 10:24:01.843506    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:24:01.843559    4656 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem, removing ...
	I0816 10:24:01.843565    4656 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:24:01.843699    4656 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem (1078 bytes)
	I0816 10:24:01.843904    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:24:01.843934    4656 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem, removing ...
	I0816 10:24:01.843938    4656 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:24:01.844006    4656 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem (1123 bytes)
	I0816 10:24:01.844155    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:24:01.844183    4656 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem, removing ...
	I0816 10:24:01.844188    4656 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:24:01.844260    4656 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem (1679 bytes)
	I0816 10:24:01.844439    4656 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem org=jenkins.ha-286000-m02 san=[127.0.0.1 192.169.0.6 ha-286000-m02 localhost minikube]
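
provision.go:117 issues a per-machine server certificate signed by the minikube CA, with the IP and DNS SANs listed in that line. A condensed sketch of the issuance step with crypto/x509 (the org string and validity period are illustrative, and loading the CA pair is left out):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    // newServerCert sketches the "generating server cert" step: issue a
    // server certificate signed by the CA with the given IP and DNS SANs.
    // A real implementation would also PEM-encode and persist the key.
    func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP, dns []string) ([]byte, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-286000-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  ips,
    		DNSNames:     dns,
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    	if err != nil {
    		return nil, err
    	}
    	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
    }

    func main() {
    	fmt.Println("see newServerCert; wiring up the CA pair is omitted from this sketch")
    }
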
	I0816 10:24:02.337393    4656 provision.go:177] copyRemoteCerts
	I0816 10:24:02.337441    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 10:24:02.337455    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:24:02.337604    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:24:02.337706    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:02.337804    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:24:02.337897    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:24:02.378639    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 10:24:02.378714    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 10:24:02.398417    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 10:24:02.398480    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 10:24:02.418213    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 10:24:02.418277    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 10:24:02.438096    4656 provision.go:87] duration metric: took 595.044673ms to configureAuth
	I0816 10:24:02.438110    4656 buildroot.go:189] setting minikube options for container-runtime
	I0816 10:24:02.438277    4656 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:24:02.438294    4656 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:24:02.438430    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:24:02.438542    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:24:02.438634    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:02.438711    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:02.438803    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:24:02.438923    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:02.439049    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:24:02.439057    4656 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 10:24:02.506619    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 10:24:02.506630    4656 buildroot.go:70] root file system type: tmpfs
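
Because the buildroot guest runs from a tmpfs root, nothing written to /lib/systemd/system survives a reboot, which is why the docker unit below is regenerated on every start. The same rootfs check done in-process rather than via `df` (Linux-only, using golang.org/x/sys/unix):

    package main

    import (
    	"fmt"

    	"golang.org/x/sys/unix"
    )

    // rootIsTmpfs answers the same question as
    // `df --output=fstype / | tail -n 1` from inside the guest.
    func rootIsTmpfs() (bool, error) {
    	var st unix.Statfs_t
    	if err := unix.Statfs("/", &st); err != nil {
    		return false, err
    	}
    	return st.Type == unix.TMPFS_MAGIC, nil
    }

    func main() {
    	fmt.Println(rootIsTmpfs())
    }
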
	I0816 10:24:02.506699    4656 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 10:24:02.506717    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:24:02.506855    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:24:02.506952    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:02.507065    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:02.507163    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:24:02.507316    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:02.507497    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:24:02.507542    4656 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 10:24:02.585569    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 10:24:02.585592    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:24:02.585731    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:24:02.585811    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:02.585904    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:02.585995    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:24:02.586114    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:02.586256    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:24:02.586268    4656 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 10:24:04.282251    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0816 10:24:04.282267    4656 machine.go:96] duration metric: took 13.663433605s to provisionDockerMachine
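[editor's note] The SSH command above is an idempotent unit-update idiom: diff the freshly rendered docker.service.new against the live unit and, only when they differ (or the live file is missing, as it was here), move the new file into place and daemon-reload/enable/restart. A rough pure-Go equivalent of the same pattern follows; it is a sketch of the idiom, not minikube's implementation, and the paths are taken from the log for illustration.

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // syncUnit installs newPath over livePath and restarts the unit,
    // but only when the contents actually differ -- mirroring the
    // `diff -u old new || { mv; daemon-reload; enable; restart; }` shell idiom.
    func syncUnit(livePath, newPath, unit string) error {
        live, err := os.ReadFile(livePath) // a missing live file counts as "different"
        fresh, ferr := os.ReadFile(newPath)
        if ferr != nil {
            return ferr
        }
        if err == nil && bytes.Equal(live, fresh) {
            return os.Remove(newPath) // nothing to install; drop the staged copy
        }
        if err := os.Rename(newPath, livePath); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"daemon-reload"}, {"enable", unit}, {"restart", unit},
        } {
            if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        if err := syncUnit("/lib/systemd/system/docker.service",
            "/lib/systemd/system/docker.service.new", "docker"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }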
	I0816 10:24:04.282274    4656 start.go:293] postStartSetup for "ha-286000-m02" (driver="hyperkit")
	I0816 10:24:04.282282    4656 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 10:24:04.282291    4656 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:24:04.282476    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 10:24:04.282490    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:24:04.282590    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:24:04.282676    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:04.282759    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:24:04.282862    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:24:04.323177    4656 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 10:24:04.326227    4656 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 10:24:04.326238    4656 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/addons for local assets ...
	I0816 10:24:04.326327    4656 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/files for local assets ...
	I0816 10:24:04.326475    4656 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> 18312.pem in /etc/ssl/certs
	I0816 10:24:04.326481    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /etc/ssl/certs/18312.pem
	I0816 10:24:04.326635    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 10:24:04.333923    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:24:04.354007    4656 start.go:296] duration metric: took 71.735624ms for postStartSetup
	I0816 10:24:04.354029    4656 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:24:04.354205    4656 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0816 10:24:04.354219    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:24:04.354303    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:24:04.354400    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:04.354484    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:24:04.354570    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:24:04.394664    4656 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0816 10:24:04.394719    4656 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0816 10:24:04.426272    4656 fix.go:56] duration metric: took 13.919762029s for fixHost
	I0816 10:24:04.426298    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:24:04.426444    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:24:04.426552    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:04.426653    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:04.426754    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:24:04.426882    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:04.427028    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:24:04.427036    4656 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 10:24:04.493811    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723829044.518955224
	
	I0816 10:24:04.493822    4656 fix.go:216] guest clock: 1723829044.518955224
	I0816 10:24:04.493832    4656 fix.go:229] Guest: 2024-08-16 10:24:04.518955224 -0700 PDT Remote: 2024-08-16 10:24:04.426286 -0700 PDT m=+33.045019463 (delta=92.669224ms)
	I0816 10:24:04.493843    4656 fix.go:200] guest clock delta is within tolerance: 92.669224ms
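[editor's note] fix.go above reads the guest clock with `date +%s.%N` and compares it against the host clock, accepting the skew when it falls inside a tolerance (92.669224ms here). A small sketch of parsing that output and applying such a check follows; the one-second threshold is an assumption for illustration, not minikube's configured value.

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock turns `date +%s.%N` output such as
    // "1723829044.518955224" into a time.Time.
    func parseGuestClock(s string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, _ := parseGuestClock("1723829044.518955224")
        delta := time.Now().Sub(guest)
        const tolerance = time.Second // illustrative threshold
        if math.Abs(float64(delta)) < float64(tolerance) {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
        }
    }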
	I0816 10:24:04.493847    4656 start.go:83] releasing machines lock for "ha-286000-m02", held for 13.987372778s
	I0816 10:24:04.493864    4656 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:24:04.494002    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:24:04.518312    4656 out.go:177] * Found network options:
	I0816 10:24:04.540563    4656 out.go:177]   - NO_PROXY=192.169.0.5
	W0816 10:24:04.562476    4656 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:24:04.562514    4656 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:24:04.563369    4656 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:24:04.563631    4656 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:24:04.563760    4656 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 10:24:04.563821    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	W0816 10:24:04.563878    4656 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:24:04.563978    4656 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0816 10:24:04.563994    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:24:04.563998    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:24:04.564194    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:04.564230    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:24:04.564370    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:24:04.564412    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:04.564603    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:24:04.564677    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:24:04.564735    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	W0816 10:24:04.601353    4656 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 10:24:04.601410    4656 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 10:24:04.653940    4656 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 10:24:04.653960    4656 start.go:495] detecting cgroup driver to use...
	I0816 10:24:04.654084    4656 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:24:04.669702    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 10:24:04.678676    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 10:24:04.687652    4656 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 10:24:04.687695    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 10:24:04.696611    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:24:04.705567    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 10:24:04.714412    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:24:04.723256    4656 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 10:24:04.732202    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 10:24:04.746674    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 10:24:04.757904    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 10:24:04.767905    4656 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 10:24:04.779013    4656 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 10:24:04.790474    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:24:04.892919    4656 ssh_runner.go:195] Run: sudo systemctl restart containerd
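[editor's note] The sed series above rewrites /etc/containerd/config.toml in place: it pins the pause sandbox image, disables restrict_oom_score_adj, forces SystemdCgroup = false (the "cgroupfs" driver), migrates runtime names to io.containerd.runc.v2, and points conf_dir at /etc/cni/net.d. A sketch of the same line-oriented rewrite in Go, shown for the SystemdCgroup toggle only (regexp-based, like the sed call it mirrors):

    package main

    import (
        "fmt"
        "regexp"
    )

    // setCgroupfs flips every `SystemdCgroup = ...` assignment to false,
    // preserving indentation, like the log's
    // `sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'`.
    var systemdCgroupRe = regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)

    func setCgroupfs(configToml []byte) []byte {
        return systemdCgroupRe.ReplaceAll(configToml, []byte("${1}SystemdCgroup = false"))
    }

    func main() {
        in := []byte("  [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n    SystemdCgroup = true\n")
        fmt.Print(string(setCgroupfs(in)))
    }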
	I0816 10:24:04.911874    4656 start.go:495] detecting cgroup driver to use...
	I0816 10:24:04.911946    4656 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 10:24:04.929416    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:24:04.941191    4656 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 10:24:04.954835    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:24:04.965605    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:24:04.976040    4656 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 10:24:05.001090    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:24:05.011999    4656 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:24:05.026893    4656 ssh_runner.go:195] Run: which cri-dockerd
	I0816 10:24:05.029920    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 10:24:05.037094    4656 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 10:24:05.050742    4656 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 10:24:05.142175    4656 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 10:24:05.247816    4656 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 10:24:05.247843    4656 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 10:24:05.261875    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:24:05.354182    4656 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:24:07.691138    4656 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.337231062s)
	I0816 10:24:07.691198    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0816 10:24:07.701875    4656 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0816 10:24:07.715113    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:24:07.725351    4656 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0816 10:24:07.820462    4656 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0816 10:24:07.932462    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:24:08.044265    4656 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0816 10:24:08.057914    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:24:08.069171    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:24:08.165855    4656 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0816 10:24:08.229743    4656 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0816 10:24:08.229822    4656 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
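[editor's note] start.go above declares it will wait up to 60s for /var/run/cri-dockerd.sock, then probes it with stat (and immediately succeeds here). A generic wait-for-path loop in Go, as a sketch; the 500ms poll interval is an assumption for illustration.

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists or the deadline passes,
    // like the "Will wait 60s for socket path" step above.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for %s", path)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }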
	I0816 10:24:08.234625    4656 start.go:563] Will wait 60s for crictl version
	I0816 10:24:08.234677    4656 ssh_runner.go:195] Run: which crictl
	I0816 10:24:08.237852    4656 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 10:24:08.262491    4656 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0816 10:24:08.262569    4656 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:24:08.282005    4656 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:24:08.324107    4656 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0816 10:24:08.365750    4656 out.go:177]   - env NO_PROXY=192.169.0.5
	I0816 10:24:08.386602    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:24:08.387035    4656 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0816 10:24:08.391617    4656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
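[editor's note] The one-liner above refreshes /etc/hosts safely: filter out any stale host.minikube.internal line, append the current mapping, write to a temp file, then copy it back over the original with sudo. The same transformation expressed in Go, as a sketch; the hard-coded path and entry are taken from the log.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHost removes any existing line ending in "\t<host>" and
    // appends a fresh "ip\thost" entry, mirroring the shell pipeline
    //   { grep -v $'\thost...' /etc/hosts; echo "ip\thost"; } > tmp; cp tmp /etc/hosts
    func upsertHost(hosts []byte, ip, host string) []byte {
        var out []string
        for _, line := range strings.Split(string(hosts), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // drop the stale entry
            }
            if line != "" {
                out = append(out, line)
            }
        }
        out = append(out, ip+"\t"+host)
        return []byte(strings.Join(out, "\n") + "\n")
    }

    func main() {
        hosts, err := os.ReadFile("/etc/hosts")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Print(string(upsertHost(hosts, "192.169.0.1", "host.minikube.internal")))
    }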
	I0816 10:24:08.401981    4656 mustload.go:65] Loading cluster: ha-286000
	I0816 10:24:08.402159    4656 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:24:08.402381    4656 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:24:08.402414    4656 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:24:08.411266    4656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52207
	I0816 10:24:08.411600    4656 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:24:08.411912    4656 main.go:141] libmachine: Using API Version  1
	I0816 10:24:08.411923    4656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:24:08.412158    4656 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:24:08.412273    4656 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:24:08.412350    4656 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:24:08.412439    4656 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 4669
	I0816 10:24:08.413371    4656 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:24:08.413648    4656 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:24:08.413671    4656 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:24:08.422352    4656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52209
	I0816 10:24:08.422710    4656 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:24:08.423035    4656 main.go:141] libmachine: Using API Version  1
	I0816 10:24:08.423046    4656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:24:08.423253    4656 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:24:08.423365    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:24:08.423454    4656 certs.go:68] Setting up /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000 for IP: 192.169.0.6
	I0816 10:24:08.423460    4656 certs.go:194] generating shared ca certs ...
	I0816 10:24:08.423469    4656 certs.go:226] acquiring lock for ca certs: {Name:mka549fbc467849834d2267da204028c49d88a3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:24:08.423616    4656 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key
	I0816 10:24:08.423685    4656 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key
	I0816 10:24:08.423693    4656 certs.go:256] generating profile certs ...
	I0816 10:24:08.423785    4656 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key
	I0816 10:24:08.423872    4656 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.df014ba6
	I0816 10:24:08.423924    4656 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key
	I0816 10:24:08.423931    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 10:24:08.423952    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 10:24:08.423978    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 10:24:08.423996    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 10:24:08.424013    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 10:24:08.424031    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 10:24:08.424049    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 10:24:08.424065    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 10:24:08.424139    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem (1338 bytes)
	W0816 10:24:08.424181    4656 certs.go:480] ignoring /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831_empty.pem, impossibly tiny 0 bytes
	I0816 10:24:08.424189    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 10:24:08.424243    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem (1078 bytes)
	I0816 10:24:08.424278    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem (1123 bytes)
	I0816 10:24:08.424308    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem (1679 bytes)
	I0816 10:24:08.424377    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:24:08.424414    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem -> /usr/share/ca-certificates/1831.pem
	I0816 10:24:08.424439    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /usr/share/ca-certificates/18312.pem
	I0816 10:24:08.424464    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:24:08.424490    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:24:08.424585    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:24:08.424670    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:24:08.424754    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:24:08.424829    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:24:08.455631    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0816 10:24:08.459165    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0816 10:24:08.467170    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0816 10:24:08.470222    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0816 10:24:08.478239    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0816 10:24:08.481358    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0816 10:24:08.489236    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0816 10:24:08.492402    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0816 10:24:08.500317    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0816 10:24:08.503508    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0816 10:24:08.511673    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0816 10:24:08.514769    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0816 10:24:08.522766    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 10:24:08.542887    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 10:24:08.562071    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 10:24:08.581743    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 10:24:08.600945    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0816 10:24:08.620933    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 10:24:08.640254    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 10:24:08.659444    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 10:24:08.678715    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem --> /usr/share/ca-certificates/1831.pem (1338 bytes)
	I0816 10:24:08.697527    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /usr/share/ca-certificates/18312.pem (1708 bytes)
	I0816 10:24:08.716988    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 10:24:08.735913    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0816 10:24:08.749507    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0816 10:24:08.763125    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0816 10:24:08.776902    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0816 10:24:08.790611    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0816 10:24:08.804538    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0816 10:24:08.817970    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0816 10:24:08.831472    4656 ssh_runner.go:195] Run: openssl version
	I0816 10:24:08.835773    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18312.pem && ln -fs /usr/share/ca-certificates/18312.pem /etc/ssl/certs/18312.pem"
	I0816 10:24:08.845139    4656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18312.pem
	I0816 10:24:08.848508    4656 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 16:57 /usr/share/ca-certificates/18312.pem
	I0816 10:24:08.848545    4656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18312.pem
	I0816 10:24:08.852837    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18312.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 10:24:08.861881    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 10:24:08.870959    4656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:24:08.874362    4656 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:48 /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:24:08.874393    4656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:24:08.878676    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 10:24:08.887721    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1831.pem && ln -fs /usr/share/ca-certificates/1831.pem /etc/ssl/certs/1831.pem"
	I0816 10:24:08.896767    4656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1831.pem
	I0816 10:24:08.900184    4656 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 16:57 /usr/share/ca-certificates/1831.pem
	I0816 10:24:08.900218    4656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1831.pem
	I0816 10:24:08.904590    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1831.pem /etc/ssl/certs/51391683.0"
	I0816 10:24:08.913817    4656 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 10:24:08.917320    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 10:24:08.921592    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 10:24:08.925840    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 10:24:08.930232    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 10:24:08.934401    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 10:24:08.938749    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
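[editor's note] Each `openssl x509 -checkend 86400` run above asks whether a certificate expires within the next 24 hours (86,400 seconds); a non-zero exit would mark that cert for regeneration. The equivalent check using Go's standard library, with one of the cert paths from the log as the example input:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in a PEM file
    // expires inside the given window, like `openssl x509 -checkend`.
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 86400*time.Second)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }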
	I0816 10:24:08.943061    4656 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.0 docker true true} ...
	I0816 10:24:08.943117    4656 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-286000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
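[editor's note] The kubelet drop-in above follows the same ExecStart-clearing pattern as the docker.service unit: an empty ExecStart= resets the inherited command, and the second ExecStart= supplies the per-node flags (--hostname-override and --node-ip are the only values that vary between m01 and m02). A trivial Go sketch of rendering that drop-in from node parameters; the template is copied from the log, the function is hypothetical.

    package main

    import "fmt"

    // renderKubeletUnit fills in the per-node kubelet flags seen in the
    // log; version, node name, and node IP vary per machine.
    func renderKubeletUnit(version, node, ip string) string {
        return fmt.Sprintf(`[Unit]
    Wants=docker.socket

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/%s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s

    [Install]
    `, version, node, ip)
    }

    func main() {
        fmt.Print(renderKubeletUnit("v1.31.0", "ha-286000-m02", "192.169.0.6"))
    }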
	I0816 10:24:08.943138    4656 kube-vip.go:115] generating kube-vip config ...
	I0816 10:24:08.943173    4656 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 10:24:08.956099    4656 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 10:24:08.956137    4656 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
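[editor's note] The manifest above is a static pod: kube-vip runs on every control-plane node with NET_ADMIN/NET_RAW, announces the virtual IP 192.169.0.254 on eth0 via ARP, takes a leader-election lease (plndr-cp-lock), and, with lb_enable/lb_port set, load-balances API-server traffic on 8443. A small sketch that parses such a manifest and extracts the VIP-related env values; the gopkg.in/yaml.v3 dependency and the trimmed struct shape are assumptions for illustration, not part of minikube.

    package main

    import (
        "fmt"
        "os"

        "gopkg.in/yaml.v3"
    )

    // Just enough of the Pod schema to reach the container env list.
    type pod struct {
        Spec struct {
            Containers []struct {
                Env []struct {
                    Name  string `yaml:"name"`
                    Value string `yaml:"value"`
                } `yaml:"env"`
            } `yaml:"containers"`
        } `yaml:"spec"`
    }

    func main() {
        data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        var p pod
        if err := yaml.Unmarshal(data, &p); err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        for _, c := range p.Spec.Containers {
            for _, e := range c.Env {
                switch e.Name {
                case "address", "vip_interface", "lb_port":
                    fmt.Printf("%s=%s\n", e.Name, e.Value)
                }
            }
        }
    }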
	I0816 10:24:08.956187    4656 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 10:24:08.964732    4656 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 10:24:08.964780    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0816 10:24:08.972962    4656 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0816 10:24:08.986351    4656 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 10:24:08.999555    4656 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0816 10:24:09.013514    4656 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0816 10:24:09.016494    4656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:24:09.026607    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:24:09.119324    4656 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:24:09.134140    4656 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:24:09.134339    4656 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:24:09.155614    4656 out.go:177] * Verifying Kubernetes components...
	I0816 10:24:09.197468    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:24:09.303306    4656 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:24:09.318292    4656 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:24:09.318481    4656 kapi.go:59] client config for ha-286000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key", CAFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x5540f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0816 10:24:09.318519    4656 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0816 10:24:09.318689    4656 node_ready.go:35] waiting up to 6m0s for node "ha-286000-m02" to be "Ready" ...
	I0816 10:24:09.318767    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:24:09.318772    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:09.318780    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:09.318783    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.478519    4656 round_trippers.go:574] Response Status: 200 OK in 9160 milliseconds
	I0816 10:24:18.479788    4656 node_ready.go:49] node "ha-286000-m02" has status "Ready":"True"
	I0816 10:24:18.479801    4656 node_ready.go:38] duration metric: took 9.161930596s for node "ha-286000-m02" to be "Ready" ...
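[editor's note] node_ready.go above polls GET /api/v1/nodes/ha-286000-m02 until the NodeReady condition reports "True"; here the very first response already did, after a 9.16s apiserver warm-up. The same readiness loop written against client-go, as a sketch; the kubeconfig path and poll interval are illustrative.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the NodeReady condition is True.
    func nodeReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute) // matches "waiting up to 6m0s"
        for time.Now().Before(deadline) {
            n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-286000-m02", metav1.GetOptions{})
            if err == nil && nodeReady(n) {
                fmt.Println("node is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for node")
    }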
	I0816 10:24:18.479809    4656 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 10:24:18.479841    4656 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0816 10:24:18.479849    4656 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0816 10:24:18.479888    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:24:18.479893    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.479899    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.479903    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.524673    4656 round_trippers.go:574] Response Status: 200 OK in 44 milliseconds
	I0816 10:24:18.529733    4656 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-2kqjf" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:18.529785    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-2kqjf
	I0816 10:24:18.529790    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.529807    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.529813    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.533009    4656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:24:18.533408    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:24:18.533415    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.533421    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.533425    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.536536    4656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:24:18.536873    4656 pod_ready.go:93] pod "coredns-6f6b679f8f-2kqjf" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:18.536881    4656 pod_ready.go:82] duration metric: took 7.13625ms for pod "coredns-6f6b679f8f-2kqjf" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:18.536890    4656 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-rfbz7" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:18.536923    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-rfbz7
	I0816 10:24:18.536928    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.536933    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.536936    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.538881    4656 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:24:18.539268    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:24:18.539275    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.539280    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.539283    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.541207    4656 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:24:18.541586    4656 pod_ready.go:93] pod "coredns-6f6b679f8f-rfbz7" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:18.541594    4656 pod_ready.go:82] duration metric: took 4.698747ms for pod "coredns-6f6b679f8f-rfbz7" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:18.541600    4656 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:18.541631    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-286000
	I0816 10:24:18.541636    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.541641    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.541646    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.543814    4656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:24:18.544226    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:24:18.544232    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.544238    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.544241    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.546294    4656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:24:18.546667    4656 pod_ready.go:93] pod "etcd-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:18.546676    4656 pod_ready.go:82] duration metric: took 5.071416ms for pod "etcd-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:18.546683    4656 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:18.546714    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-286000-m02
	I0816 10:24:18.546719    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.546724    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.546727    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.548810    4656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:24:18.549180    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:24:18.549187    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.549193    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.549196    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.551164    4656 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:24:18.551594    4656 pod_ready.go:93] pod "etcd-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:18.551602    4656 pod_ready.go:82] duration metric: took 4.914791ms for pod "etcd-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:18.551612    4656 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:18.551646    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000
	I0816 10:24:18.551651    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.551657    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.551661    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.553736    4656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:24:18.680501    4656 request.go:632] Waited for 126.254478ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:24:18.680609    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:24:18.680620    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.680631    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.680639    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.684350    4656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:24:18.684850    4656 pod_ready.go:93] pod "kube-apiserver-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:18.684859    4656 pod_ready.go:82] duration metric: took 133.250923ms for pod "kube-apiserver-ha-286000" in "kube-system" namespace to be "Ready" ...
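[editor's note] The "Waited ... due to client-side throttling" lines come from client-go's client-side rate limiter: the rest.Config dumped earlier shows QPS:0, Burst:0, so the defaults (roughly 5 QPS with a burst of 10) apply, and the burst of back-to-back pod/node GETs here queues briefly. Raising the limits on the rest.Config removes those waits; the values below are illustrative, not a recommendation from the log.

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        // Default client-side limits are low; bump them so a burst of
        // sequential status GETs is not delayed by the rate limiter.
        cfg.QPS = 50
        cfg.Burst = 100

        if _, err := kubernetes.NewForConfig(cfg); err != nil {
            panic(err)
        }
    }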
	I0816 10:24:18.684865    4656 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:18.880626    4656 request.go:632] Waited for 195.713304ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000-m02
	I0816 10:24:18.880742    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000-m02
	I0816 10:24:18.880753    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.880765    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.880778    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.884447    4656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:24:19.081261    4656 request.go:632] Waited for 196.182218ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:24:19.081349    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:24:19.081358    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:19.081368    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:19.081377    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:19.085528    4656 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 10:24:19.085961    4656 pod_ready.go:93] pod "kube-apiserver-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:19.085970    4656 pod_ready.go:82] duration metric: took 401.129633ms for pod "kube-apiserver-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:19.085977    4656 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:19.279954    4656 request.go:632] Waited for 193.926578ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000
	I0816 10:24:19.279986    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000
	I0816 10:24:19.279991    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:19.279997    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:19.280003    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:19.283105    4656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:24:19.480663    4656 request.go:632] Waited for 196.83909ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:24:19.480698    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:24:19.480704    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:19.480710    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:19.480728    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:19.483828    4656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:24:19.484258    4656 pod_ready.go:93] pod "kube-controller-manager-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:19.484269    4656 pod_ready.go:82] duration metric: took 398.316107ms for pod "kube-controller-manager-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:19.484276    4656 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:19.681917    4656 request.go:632] Waited for 197.597037ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000-m02
	I0816 10:24:19.682075    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000-m02
	I0816 10:24:19.682091    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:19.682103    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:19.682113    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:19.686127    4656 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 10:24:19.880667    4656 request.go:632] Waited for 193.865313ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:24:19.880730    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:24:19.880736    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:19.880742    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:19.880750    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:19.884780    4656 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 10:24:19.885298    4656 pod_ready.go:93] pod "kube-controller-manager-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:19.885308    4656 pod_ready.go:82] duration metric: took 401.055356ms for pod "kube-controller-manager-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:19.885315    4656 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5qhgk" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:20.081205    4656 request.go:632] Waited for 195.805147ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5qhgk
	I0816 10:24:20.081294    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5qhgk
	I0816 10:24:20.081304    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:20.081316    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:20.081321    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:20.085631    4656 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 10:24:20.280455    4656 request.go:632] Waited for 194.474574ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m04
	I0816 10:24:20.280532    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m04
	I0816 10:24:20.280539    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:20.280547    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:20.280552    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:20.287097    4656 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0816 10:24:20.287492    4656 pod_ready.go:93] pod "kube-proxy-5qhgk" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:20.287501    4656 pod_ready.go:82] duration metric: took 402.209883ms for pod "kube-proxy-5qhgk" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:20.287508    4656 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pt669" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:20.480572    4656 request.go:632] Waited for 193.037822ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pt669
	I0816 10:24:20.480648    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pt669
	I0816 10:24:20.480652    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:20.480659    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:20.480663    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:20.483171    4656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:24:20.681664    4656 request.go:632] Waited for 198.111953ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:24:20.681763    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:24:20.681771    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:20.681779    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:20.681784    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:20.684372    4656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:24:20.684693    4656 pod_ready.go:93] pod "kube-proxy-pt669" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:20.684702    4656 pod_ready.go:82] duration metric: took 397.216841ms for pod "kube-proxy-pt669" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:20.684712    4656 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-w4nt2" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:20.879782    4656 request.go:632] Waited for 195.039009ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w4nt2
	I0816 10:24:20.879907    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w4nt2
	I0816 10:24:20.879921    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:20.879933    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:20.879941    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:20.883394    4656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:24:21.079930    4656 request.go:632] Waited for 195.888686ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:24:21.080028    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:24:21.080039    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:21.080050    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:21.080059    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:21.083488    4656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:24:21.083893    4656 pod_ready.go:93] pod "kube-proxy-w4nt2" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:21.083903    4656 pod_ready.go:82] duration metric: took 399.212461ms for pod "kube-proxy-w4nt2" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:21.083911    4656 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:21.281558    4656 request.go:632] Waited for 197.607208ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000
	I0816 10:24:21.281628    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000
	I0816 10:24:21.281639    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:21.281648    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:21.281654    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:21.284223    4656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:24:21.480419    4656 request.go:632] Waited for 195.838756ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:24:21.480514    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:24:21.480525    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:21.480537    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:21.480544    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:21.483887    4656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:24:21.484430    4656 pod_ready.go:93] pod "kube-scheduler-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:21.484439    4656 pod_ready.go:82] duration metric: took 400.549346ms for pod "kube-scheduler-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:21.484446    4656 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:21.679727    4656 request.go:632] Waited for 195.252345ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000-m02
	I0816 10:24:21.679760    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000-m02
	I0816 10:24:21.679765    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:21.679769    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:21.679805    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:21.686476    4656 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0816 10:24:21.880162    4656 request.go:632] Waited for 193.203193ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:24:21.880221    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:24:21.880231    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:21.880247    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:21.880256    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:21.884015    4656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:24:21.884602    4656 pod_ready.go:93] pod "kube-scheduler-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:21.884611    4656 pod_ready.go:82] duration metric: took 400.186514ms for pod "kube-scheduler-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:21.884619    4656 pod_ready.go:39] duration metric: took 3.405043457s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
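The pod_ready.go steps above repeatedly GET each system pod (and its node) and test the pod's Ready condition until it reports "True" or the 6m0s budget runs out. Below is a minimal client-go sketch of that wait pattern, illustrative only (not minikube's actual helper); the kubeconfig path and pod name are placeholders:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True,
// mirroring the has status "Ready":"True" checks in the log above.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 500ms, give up after 6 minutes -- the same budget as the
	// "waiting up to 6m0s" messages above.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-apiserver-ha-286000", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient GET errors as "not ready yet"
			}
			return isPodReady(pod), nil
		})
	fmt.Println("ready wait finished, err =", err)
}

Returning (false, nil) from the condition on error keeps the loop retrying until the timeout, which matches the retry-until-deadline behavior the duration metrics above record.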
	I0816 10:24:21.884636    4656 api_server.go:52] waiting for apiserver process to appear ...
	I0816 10:24:21.884692    4656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:24:21.896175    4656 api_server.go:72] duration metric: took 12.763101701s to wait for apiserver process to appear ...
	I0816 10:24:21.896187    4656 api_server.go:88] waiting for apiserver healthz status ...
	I0816 10:24:21.896203    4656 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0816 10:24:21.900677    4656 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0816 10:24:21.900711    4656 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0816 10:24:21.900715    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:21.900720    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:21.900725    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:21.901496    4656 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0816 10:24:21.901599    4656 api_server.go:141] control plane version: v1.31.0
	I0816 10:24:21.901609    4656 api_server.go:131] duration metric: took 5.41777ms to wait for apiserver health ...
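The healthz probe above is a plain HTTPS GET against the control-plane endpoint whose body must read "ok", followed by a /version request to read the control plane version. A short sketch of that check; note the TLS handling here is deliberately simplified to InsecureSkipVerify, whereas a real check would load the cluster CA into a RootCAs pool:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: skip TLS verification only to keep this sketch short.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.169.0.5:8443/healthz")
	if err != nil {
		fmt.Println("apiserver not healthy yet:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// The log above expects HTTP 200 with the literal body "ok".
	fmt.Printf("status=%d body=%q\n", resp.StatusCode, body)
}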
	I0816 10:24:21.901617    4656 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 10:24:22.081425    4656 request.go:632] Waited for 179.775499ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:24:22.081509    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:24:22.081521    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:22.081533    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:22.081542    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:22.087308    4656 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 10:24:22.090908    4656 system_pods.go:59] 19 kube-system pods found
	I0816 10:24:22.090924    4656 system_pods.go:61] "coredns-6f6b679f8f-2kqjf" [be18cd09-3e3b-4749-bf29-7001a879f593] Running
	I0816 10:24:22.090929    4656 system_pods.go:61] "coredns-6f6b679f8f-rfbz7" [a593e27a-e38b-46c7-a603-44963c31c095] Running
	I0816 10:24:22.090932    4656 system_pods.go:61] "etcd-ha-286000" [c45bc623-befe-4e88-8da1-bb4f7d7be5b2] Running
	I0816 10:24:22.090935    4656 system_pods.go:61] "etcd-ha-286000-m02" [e14fbe0c-4527-4cbb-8c13-01c0b32a8d15] Running
	I0816 10:24:22.090938    4656 system_pods.go:61] "kindnet-b9r6s" [07db3ed9-4355-45a2-be01-15273d773c65] Running
	I0816 10:24:22.090940    4656 system_pods.go:61] "kindnet-t9kjf" [20c57f28-5e86-44a4-8a2a-07e1e240abf2] Running
	I0816 10:24:22.090943    4656 system_pods.go:61] "kindnet-whqxb" [1cc5291b-52ea-44b5-b87c-607d46b5281a] Running
	I0816 10:24:22.090946    4656 system_pods.go:61] "kube-apiserver-ha-286000" [c0d298a9-931b-4084-9e5f-01a88de27456] Running
	I0816 10:24:22.090949    4656 system_pods.go:61] "kube-apiserver-ha-286000-m02" [3666c131-2b88-483a-ab8c-b18cb8db7d6f] Running
	I0816 10:24:22.090952    4656 system_pods.go:61] "kube-controller-manager-ha-286000" [5a4f4c5d-d89a-4e5f-b022-479ca31f64cd] Running
	I0816 10:24:22.090954    4656 system_pods.go:61] "kube-controller-manager-ha-286000-m02" [a347c3d4-fa47-49ea-93a0-83087017a2bd] Running
	I0816 10:24:22.090957    4656 system_pods.go:61] "kube-proxy-5qhgk" [da0cb383-9e0b-44cc-9f79-7861029514f7] Running
	I0816 10:24:22.090959    4656 system_pods.go:61] "kube-proxy-pt669" [798c956f-8f08-465a-b0d9-90b65f703f80] Running
	I0816 10:24:22.090962    4656 system_pods.go:61] "kube-proxy-w4nt2" [79ce2248-f8fd-4a3b-aeb7-f81d7ad64564] Running
	I0816 10:24:22.090967    4656 system_pods.go:61] "kube-scheduler-ha-286000" [b4af80e0-cbb0-4257-a212-3585faff0875] Running
	I0816 10:24:22.090971    4656 system_pods.go:61] "kube-scheduler-ha-286000-m02" [89920e93-c959-47ec-8a2c-f2b63e8c3d3a] Running
	I0816 10:24:22.090973    4656 system_pods.go:61] "kube-vip-ha-286000" [b0f85be2-2afb-44b0-a701-161b24529349] Running
	I0816 10:24:22.090976    4656 system_pods.go:61] "kube-vip-ha-286000-m02" [4982dc53-946d-4f75-8e9e-452ef39ff2fa] Running
	I0816 10:24:22.090978    4656 system_pods.go:61] "storage-provisioner" [4805d53b-2db3-4092-a3f2-d4a854e93adc] Running
	I0816 10:24:22.090983    4656 system_pods.go:74] duration metric: took 189.374292ms to wait for pod list to return data ...
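Every "Waited for ... due to client-side throttling, not priority and fairness" line above comes from client-go's client-side token-bucket rate limiter (defaults: QPS 5, burst 10), not from the API server. A sketch of the limiter that produces those waits, with client-go's default values written out explicitly for illustration:

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// rest.Config defaults: QPS=5, Burst=10. Each request waits on this
	// bucket; once the burst is spent, callers block ~200ms per request.
	limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)
	for i := 0; i < 15; i++ {
		start := time.Now()
		limiter.Accept() // blocks until a token is available
		fmt.Printf("request %2d waited %v\n", i, time.Since(start).Round(time.Millisecond))
	}
}

At QPS=5 the steady-state spacing is 200ms per token, which lines up with the 179-198ms waits recorded in this section.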
	I0816 10:24:22.090989    4656 default_sa.go:34] waiting for default service account to be created ...
	I0816 10:24:22.280932    4656 request.go:632] Waited for 189.91131ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0816 10:24:22.280992    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0816 10:24:22.280998    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:22.281004    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:22.281007    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:22.286126    4656 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 10:24:22.286303    4656 default_sa.go:45] found service account: "default"
	I0816 10:24:22.286313    4656 default_sa.go:55] duration metric: took 195.332329ms for default service account to be created ...
	I0816 10:24:22.286320    4656 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 10:24:22.480087    4656 request.go:632] Waited for 193.706904ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:24:22.480149    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:24:22.480160    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:22.480172    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:22.480181    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:22.486391    4656 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0816 10:24:22.490416    4656 system_pods.go:86] 19 kube-system pods found
	I0816 10:24:22.490428    4656 system_pods.go:89] "coredns-6f6b679f8f-2kqjf" [be18cd09-3e3b-4749-bf29-7001a879f593] Running
	I0816 10:24:22.490432    4656 system_pods.go:89] "coredns-6f6b679f8f-rfbz7" [a593e27a-e38b-46c7-a603-44963c31c095] Running
	I0816 10:24:22.490435    4656 system_pods.go:89] "etcd-ha-286000" [c45bc623-befe-4e88-8da1-bb4f7d7be5b2] Running
	I0816 10:24:22.490443    4656 system_pods.go:89] "etcd-ha-286000-m02" [e14fbe0c-4527-4cbb-8c13-01c0b32a8d15] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 10:24:22.490447    4656 system_pods.go:89] "kindnet-b9r6s" [07db3ed9-4355-45a2-be01-15273d773c65] Running
	I0816 10:24:22.490454    4656 system_pods.go:89] "kindnet-t9kjf" [20c57f28-5e86-44a4-8a2a-07e1e240abf2] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0816 10:24:22.490458    4656 system_pods.go:89] "kindnet-whqxb" [1cc5291b-52ea-44b5-b87c-607d46b5281a] Running
	I0816 10:24:22.490462    4656 system_pods.go:89] "kube-apiserver-ha-286000" [c0d298a9-931b-4084-9e5f-01a88de27456] Running
	I0816 10:24:22.490466    4656 system_pods.go:89] "kube-apiserver-ha-286000-m02" [3666c131-2b88-483a-ab8c-b18cb8db7d6f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 10:24:22.490469    4656 system_pods.go:89] "kube-controller-manager-ha-286000" [5a4f4c5d-d89a-4e5f-b022-479ca31f64cd] Running
	I0816 10:24:22.490478    4656 system_pods.go:89] "kube-controller-manager-ha-286000-m02" [a347c3d4-fa47-49ea-93a0-83087017a2bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 10:24:22.490483    4656 system_pods.go:89] "kube-proxy-5qhgk" [da0cb383-9e0b-44cc-9f79-7861029514f7] Running
	I0816 10:24:22.490487    4656 system_pods.go:89] "kube-proxy-pt669" [798c956f-8f08-465a-b0d9-90b65f703f80] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 10:24:22.490496    4656 system_pods.go:89] "kube-proxy-w4nt2" [79ce2248-f8fd-4a3b-aeb7-f81d7ad64564] Running
	I0816 10:24:22.490499    4656 system_pods.go:89] "kube-scheduler-ha-286000" [b4af80e0-cbb0-4257-a212-3585faff0875] Running
	I0816 10:24:22.490503    4656 system_pods.go:89] "kube-scheduler-ha-286000-m02" [89920e93-c959-47ec-8a2c-f2b63e8c3d3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 10:24:22.490507    4656 system_pods.go:89] "kube-vip-ha-286000" [b0f85be2-2afb-44b0-a701-161b24529349] Running
	I0816 10:24:22.490511    4656 system_pods.go:89] "kube-vip-ha-286000-m02" [4982dc53-946d-4f75-8e9e-452ef39ff2fa] Running
	I0816 10:24:22.490514    4656 system_pods.go:89] "storage-provisioner" [4805d53b-2db3-4092-a3f2-d4a854e93adc] Running
	I0816 10:24:22.490518    4656 system_pods.go:126] duration metric: took 204.207739ms to wait for k8s-apps to be running ...
	I0816 10:24:22.490523    4656 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 10:24:22.490574    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:24:22.501971    4656 system_svc.go:56] duration metric: took 11.445041ms WaitForService to wait for kubelet
	I0816 10:24:22.501986    4656 kubeadm.go:582] duration metric: took 13.368953512s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:24:22.501997    4656 node_conditions.go:102] verifying NodePressure condition ...
	I0816 10:24:22.681633    4656 request.go:632] Waited for 179.608953ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0816 10:24:22.681696    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0816 10:24:22.681702    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:22.681708    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:22.681744    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:22.684771    4656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:24:22.685508    4656 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 10:24:22.685523    4656 node_conditions.go:123] node cpu capacity is 2
	I0816 10:24:22.685532    4656 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 10:24:22.685535    4656 node_conditions.go:123] node cpu capacity is 2
	I0816 10:24:22.685538    4656 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 10:24:22.685541    4656 node_conditions.go:123] node cpu capacity is 2
	I0816 10:24:22.685544    4656 node_conditions.go:105] duration metric: took 183.55481ms to run NodePressure ...
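The NodePressure step lists all nodes and reads capacity out of each node's status; the three ephemeral-storage/cpu pairs above correspond to the three nodes currently up. A self-contained sketch of the same read (placeholder kubeconfig path, illustrative only):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
		// A NodePressure verification would also read n.Status.Conditions,
		// expecting MemoryPressure/DiskPressure/PIDPressure to be False.
	}
}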
	I0816 10:24:22.685552    4656 start.go:241] waiting for startup goroutines ...
	I0816 10:24:22.685571    4656 start.go:255] writing updated cluster config ...
	I0816 10:24:22.707964    4656 out.go:201] 
	I0816 10:24:22.729754    4656 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:24:22.729889    4656 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:24:22.752182    4656 out.go:177] * Starting "ha-286000-m03" control-plane node in "ha-286000" cluster
	I0816 10:24:22.794355    4656 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:24:22.794388    4656 cache.go:56] Caching tarball of preloaded images
	I0816 10:24:22.794595    4656 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 10:24:22.794623    4656 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:24:22.794796    4656 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:24:22.870926    4656 start.go:360] acquireMachinesLock for ha-286000-m03: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:24:22.871061    4656 start.go:364] duration metric: took 106.312µs to acquireMachinesLock for "ha-286000-m03"
	I0816 10:24:22.871092    4656 start.go:96] Skipping create...Using existing machine configuration
	I0816 10:24:22.871102    4656 fix.go:54] fixHost starting: m03
	I0816 10:24:22.871530    4656 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:24:22.871567    4656 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:24:22.881793    4656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52214
	I0816 10:24:22.882176    4656 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:24:22.882559    4656 main.go:141] libmachine: Using API Version  1
	I0816 10:24:22.882581    4656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:24:22.882800    4656 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:24:22.882926    4656 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:24:22.883020    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetState
	I0816 10:24:22.883103    4656 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:24:22.883215    4656 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:24:22.884141    4656 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid 3849 missing from process table
	I0816 10:24:22.884173    4656 fix.go:112] recreateIfNeeded on ha-286000-m03: state=Stopped err=<nil>
	I0816 10:24:22.884183    4656 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	W0816 10:24:22.884273    4656 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 10:24:22.934970    4656 out.go:177] * Restarting existing hyperkit VM for "ha-286000-m03" ...
	I0816 10:24:22.989195    4656 main.go:141] libmachine: (ha-286000-m03) Calling .Start
	I0816 10:24:22.989384    4656 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:24:22.989428    4656 main.go:141] libmachine: (ha-286000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/hyperkit.pid
	I0816 10:24:22.990416    4656 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid 3849 missing from process table
	I0816 10:24:22.990433    4656 main.go:141] libmachine: (ha-286000-m03) DBG | pid 3849 is in state "Stopped"
	I0816 10:24:22.990450    4656 main.go:141] libmachine: (ha-286000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/hyperkit.pid...
	I0816 10:24:22.991046    4656 main.go:141] libmachine: (ha-286000-m03) DBG | Using UUID 408efb18-ff91-428f-b414-6eb0d982a779
	I0816 10:24:23.018344    4656 main.go:141] libmachine: (ha-286000-m03) DBG | Generated MAC 8a:e:de:5b:b5:8b
	I0816 10:24:23.018367    4656 main.go:141] libmachine: (ha-286000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000
	I0816 10:24:23.018512    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"408efb18-ff91-428f-b414-6eb0d982a779", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003acb40)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:24:23.018542    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"408efb18-ff91-428f-b414-6eb0d982a779", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003acb40)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:24:23.018607    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "408efb18-ff91-428f-b414-6eb0d982a779", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/ha-286000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"}
	I0816 10:24:23.018646    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 408efb18-ff91-428f-b414-6eb0d982a779 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/ha-286000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"
	I0816 10:24:23.018659    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 10:24:23.019982    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 DEBUG: hyperkit: Pid is 4694
	I0816 10:24:23.020375    4656 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 0
	I0816 10:24:23.020392    4656 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:24:23.020487    4656 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 4694
	I0816 10:24:23.022453    4656 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:24:23.022498    4656 main.go:141] libmachine: (ha-286000-m03) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0816 10:24:23.022517    4656 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:24:23.022531    4656 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:24:23.022542    4656 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:24:23.022552    4656 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66c0d7f9}
	I0816 10:24:23.022566    4656 main.go:141] libmachine: (ha-286000-m03) DBG | Found match: 8a:e:de:5b:b5:8b
	I0816 10:24:23.022574    4656 main.go:141] libmachine: (ha-286000-m03) DBG | IP: 192.169.0.7
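To recover the restarted VM's IP, the driver scans macOS's /var/db/dhcpd_leases for the lease whose hardware address matches the generated MAC (hyperkit MACs drop leading zeros per octet, hence "8a:e:de:5b:b5:8b"). A standalone sketch of that lookup; the ip_address=/hw_address= field names are an assumption based on the stock dhcpd_leases format, and error handling is trimmed:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findIPByMAC scans a dhcpd_leases file for the entry whose hw_address
// matches mac and returns the ip_address recorded just before it.
func findIPByMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// assumed format: hw_address=1,8a:e:de:5b:b5:8b
			if parts := strings.SplitN(line, ",", 2); len(parts) == 2 && parts[1] == mac {
				return ip, sc.Err()
			}
		}
	}
	return "", fmt.Errorf("MAC %s not found in %s", mac, path)
}

func main() {
	ip, err := findIPByMAC("/var/db/dhcpd_leases", "8a:e:de:5b:b5:8b")
	fmt.Println(ip, err)
}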
	I0816 10:24:23.022592    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetConfigRaw
	I0816 10:24:23.023252    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:24:23.023444    4656 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:24:23.023931    4656 machine.go:93] provisionDockerMachine start ...
	I0816 10:24:23.023941    4656 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:24:23.024079    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:24:23.024190    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:24:23.024302    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:23.024432    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:23.024554    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:24:23.024692    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:23.024832    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:24:23.024839    4656 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 10:24:23.028441    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 10:24:23.037003    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 10:24:23.038503    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:24:23.038539    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:24:23.038554    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:24:23.038589    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:24:23.422756    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 10:24:23.422770    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 10:24:23.537534    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:24:23.537554    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:24:23.537563    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:24:23.537570    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:24:23.538449    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 10:24:23.538460    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 10:24:29.168490    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:29 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 10:24:29.168581    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:29 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 10:24:29.168594    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:29 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 10:24:29.192004    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:29 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 10:24:58.091940    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 10:24:58.091955    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:24:58.092103    4656 buildroot.go:166] provisioning hostname "ha-286000-m03"
	I0816 10:24:58.092114    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:24:58.092224    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:24:58.092330    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:24:58.092419    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:58.092518    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:58.092626    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:24:58.092758    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:58.092916    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:24:58.092925    4656 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-286000-m03 && echo "ha-286000-m03" | sudo tee /etc/hostname
	I0816 10:24:58.165459    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-286000-m03
	
	I0816 10:24:58.165475    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:24:58.165609    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:24:58.165705    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:58.165800    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:58.165888    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:24:58.166012    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:58.166160    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:24:58.166171    4656 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-286000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-286000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-286000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 10:24:58.234524    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
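Each "About to run SSH command" block above is executed through an SSH session against the guest (192.169.0.7:22, user docker, key auth, as the sshutil lines show). A compact sketch of that plumbing with golang.org/x/crypto/ssh; host-key checking is disabled here purely for brevity, and the key path is the one from the log:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User: "docker",
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Assumption: skip host-key verification to keep the sketch short.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "192.169.0.7:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// Simplified version of the idempotent /etc/hosts fixup the log runs:
	// append the 127.0.1.1 alias only if the hostname is not already present.
	out, err := sess.CombinedOutput(`grep -q 'ha-286000-m03' /etc/hosts || echo '127.0.1.1 ha-286000-m03' | sudo tee -a /etc/hosts`)
	fmt.Printf("out=%q err=%v\n", out, err)
}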
	I0816 10:24:58.234539    4656 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19461-1276/.minikube CaCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19461-1276/.minikube}
	I0816 10:24:58.234548    4656 buildroot.go:174] setting up certificates
	I0816 10:24:58.234555    4656 provision.go:84] configureAuth start
	I0816 10:24:58.234562    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:24:58.234691    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:24:58.234792    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:24:58.234865    4656 provision.go:143] copyHostCerts
	I0816 10:24:58.234895    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:24:58.234961    4656 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem, removing ...
	I0816 10:24:58.234967    4656 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:24:58.235111    4656 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem (1078 bytes)
	I0816 10:24:58.235314    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:24:58.235356    4656 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem, removing ...
	I0816 10:24:58.235361    4656 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:24:58.235442    4656 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem (1123 bytes)
	I0816 10:24:58.235582    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:24:58.235624    4656 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem, removing ...
	I0816 10:24:58.235629    4656 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:24:58.235704    4656 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem (1679 bytes)
	I0816 10:24:58.235845    4656 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem org=jenkins.ha-286000-m03 san=[127.0.0.1 192.169.0.7 ha-286000-m03 localhost minikube]
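The server certificate is minted from the shared minikube CA with the node's SANs (127.0.0.1, 192.169.0.7, ha-286000-m03, localhost, minikube) so Docker's TLS endpoint validates under any of those names. A self-contained crypto/x509 sketch of issuing such a SAN certificate; as a simplification, a throwaway CA is generated inline instead of loading ca.pem/ca-key.pem from disk:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA; the provisioner would load ca.pem / ca-key.pem instead.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert carrying the SANs listed in the provision.go line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-286000-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-286000-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.7")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}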
	I0816 10:24:58.291944    4656 provision.go:177] copyRemoteCerts
	I0816 10:24:58.291996    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 10:24:58.292012    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:24:58.292152    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:24:58.292249    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:58.292325    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:24:58.292403    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:24:58.328961    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 10:24:58.329060    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 10:24:58.348824    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 10:24:58.348900    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 10:24:58.369137    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 10:24:58.369210    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 10:24:58.388899    4656 provision.go:87] duration metric: took 154.336521ms to configureAuth
	I0816 10:24:58.388918    4656 buildroot.go:189] setting minikube options for container-runtime
	I0816 10:24:58.389098    4656 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:24:58.389135    4656 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:24:58.389270    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:24:58.389362    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:24:58.389460    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:58.389543    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:58.389622    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:24:58.389731    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:58.389859    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:24:58.389867    4656 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 10:24:58.452406    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 10:24:58.452425    4656 buildroot.go:70] root file system type: tmpfs
	I0816 10:24:58.452504    4656 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 10:24:58.452516    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:24:58.452651    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:24:58.452745    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:58.452844    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:58.452943    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:24:58.453082    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:58.453228    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:24:58.453271    4656 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 10:24:58.524937    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 10:24:58.524958    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:24:58.525096    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:24:58.525191    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:58.525277    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:58.525354    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:24:58.525485    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:58.525630    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:24:58.525643    4656 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 10:25:00.070144    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0816 10:25:00.070159    4656 machine.go:96] duration metric: took 37.04784939s to provisionDockerMachine
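The docker.service update just above uses a compare-then-swap pattern: write the rendered unit to docker.service.new, diff it against the installed unit, and only move it into place (and daemon-reload/restart) when they differ; here diff fails because no unit exists yet, so the new file is installed and the service enabled. A local Go sketch of the same idempotent-update idea (illustrative, using a /tmp path):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// updateIfChanged mirrors the "diff ... || { mv ...; daemon-reload; restart; }"
// one-liner above: install newContent at path only when it differs from what
// is already there, and report whether a service reload would be needed.
func updateIfChanged(path string, newContent []byte) (changed bool, err error) {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, newContent) {
		return false, nil // unit already up to date, nothing to restart
	}
	tmp := path + ".new"
	if err := os.WriteFile(tmp, newContent, 0o644); err != nil {
		return false, err
	}
	return true, os.Rename(tmp, path)
}

func main() {
	changed, err := updateIfChanged("/tmp/docker.service", []byte("[Unit]\nDescription=example\n"))
	fmt.Println("changed:", changed, "err:", err)
}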
	I0816 10:25:00.070167    4656 start.go:293] postStartSetup for "ha-286000-m03" (driver="hyperkit")
	I0816 10:25:00.070174    4656 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 10:25:00.070189    4656 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:25:00.070367    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 10:25:00.070380    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:25:00.070472    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:25:00.070550    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:25:00.070650    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:25:00.070738    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:25:00.107373    4656 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 10:25:00.110616    4656 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 10:25:00.110628    4656 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/addons for local assets ...
	I0816 10:25:00.110727    4656 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/files for local assets ...
	I0816 10:25:00.110900    4656 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> 18312.pem in /etc/ssl/certs
	I0816 10:25:00.110906    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /etc/ssl/certs/18312.pem
	I0816 10:25:00.111116    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 10:25:00.118270    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:25:00.138002    4656 start.go:296] duration metric: took 67.828962ms for postStartSetup
	I0816 10:25:00.138023    4656 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:25:00.138205    4656 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0816 10:25:00.138223    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:25:00.138316    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:25:00.138399    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:25:00.138484    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:25:00.138558    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:25:00.176923    4656 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0816 10:25:00.176990    4656 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0816 10:25:00.228121    4656 fix.go:56] duration metric: took 37.358659467s for fixHost
	I0816 10:25:00.228163    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:25:00.228436    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:25:00.228658    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:25:00.228845    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:25:00.229035    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:25:00.229265    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:25:00.229477    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:25:00.229490    4656 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 10:25:00.290756    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723829100.434156000
	
	I0816 10:25:00.290771    4656 fix.go:216] guest clock: 1723829100.434156000
	I0816 10:25:00.290778    4656 fix.go:229] Guest: 2024-08-16 10:25:00.434156 -0700 PDT Remote: 2024-08-16 10:25:00.228148 -0700 PDT m=+88.850268934 (delta=206.008ms)
	I0816 10:25:00.290788    4656 fix.go:200] guest clock delta is within tolerance: 206.008ms
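The fix step above runs "date +%s.%N" on the guest over SSH, parses the result, and compares it against the host clock; here the ~206ms delta is inside tolerance, so no resync is needed. A minimal Go sketch of that comparison (the runner callback and the 2-second tolerance are assumptions for illustration, not minikube's actual plumbing):

package main

import (
	"fmt"
	"math"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

// checkGuestClock reads the guest's clock via the supplied runner, then
// flags any host/guest delta beyond the given tolerance.
func checkGuestClock(runGuest func(string) (string, error), tolerance time.Duration) error {
	out, err := runGuest("date +%s.%N")
	if err != nil {
		return err
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
	if err != nil {
		return err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	if math.Abs(float64(delta)) > float64(tolerance) {
		return fmt.Errorf("guest clock delta %v exceeds tolerance %v", delta, tolerance)
	}
	fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	return nil
}

func main() {
	// Locally, a plain shell invocation stands in for the SSH runner.
	run := func(cmd string) (string, error) {
		b, err := exec.Command("sh", "-c", cmd).Output()
		return string(b), err
	}
	_ = checkGuestClock(run, 2*time.Second)
}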
	I0816 10:25:00.290792    4656 start.go:83] releasing machines lock for "ha-286000-m03", held for 37.421364862s
	I0816 10:25:00.290808    4656 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:25:00.290938    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:25:00.313666    4656 out.go:177] * Found network options:
	I0816 10:25:00.334418    4656 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0816 10:25:00.355435    4656 proxy.go:119] fail to check proxy env: Error ip not in block
	W0816 10:25:00.355461    4656 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:25:00.355478    4656 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:25:00.356143    4656 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:25:00.356356    4656 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:25:00.356474    4656 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 10:25:00.356513    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	W0816 10:25:00.356569    4656 proxy.go:119] fail to check proxy env: Error ip not in block
	W0816 10:25:00.356590    4656 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:25:00.356679    4656 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0816 10:25:00.356698    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:25:00.356711    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:25:00.356905    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:25:00.356940    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:25:00.357121    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:25:00.357153    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:25:00.357335    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:25:00.357342    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:25:00.357519    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	W0816 10:25:00.391006    4656 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 10:25:00.391060    4656 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 10:25:00.439137    4656 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
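The find/-exec pass above renames every bridge or podman CNI config to *.mk_disabled so the runtime ignores it and only the CNI minikube selects stays active; here it parked 87-podman-bridge.conflist. A rough Go equivalent of that rename sweep (directory from the log; the glob patterns mirror the find expression):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	for _, pat := range []string{"*bridge*", "*podman*"} {
		matches, _ := filepath.Glob(filepath.Join("/etc/cni/net.d", pat))
		for _, m := range matches {
			// Skip files already parked by an earlier run.
			if strings.HasSuffix(m, ".mk_disabled") {
				continue
			}
			fmt.Printf("disabling %s\n", m)
			_ = os.Rename(m, m+".mk_disabled")
		}
	}
}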
	I0816 10:25:00.439154    4656 start.go:495] detecting cgroup driver to use...
	I0816 10:25:00.439231    4656 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:25:00.454661    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 10:25:00.463185    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 10:25:00.471601    4656 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 10:25:00.471658    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 10:25:00.480421    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:25:00.488812    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 10:25:00.497664    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:25:00.506080    4656 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 10:25:00.514726    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 10:25:00.523293    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 10:25:00.531650    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 10:25:00.540020    4656 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 10:25:00.547503    4656 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 10:25:00.555089    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:25:00.643202    4656 ssh_runner.go:195] Run: sudo systemctl restart containerd
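The run of sed invocations above edits /etc/containerd/config.toml in place: pin the pause image, force SystemdCgroup = false (the cgroupfs driver), normalize the runc runtime type, and point conf_dir at /etc/cni/net.d, then daemon-reload and restart containerd. A minimal Go sketch of the central rewrite, the SystemdCgroup flip, using the same indentation-preserving pattern as the sed expression (the sample TOML is illustrative):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	config := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true`
	// Multiline match; ${1} keeps the original leading spaces intact.
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Println(re.ReplaceAllString(config, "${1}SystemdCgroup = false"))
}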
	I0816 10:25:00.663102    4656 start.go:495] detecting cgroup driver to use...
	I0816 10:25:00.663170    4656 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 10:25:00.680492    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:25:00.693170    4656 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 10:25:00.707541    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:25:00.718044    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:25:00.728609    4656 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 10:25:00.747431    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:25:00.757669    4656 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:25:00.772722    4656 ssh_runner.go:195] Run: which cri-dockerd
	I0816 10:25:00.775964    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 10:25:00.783500    4656 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 10:25:00.797291    4656 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 10:25:00.889940    4656 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 10:25:00.996518    4656 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 10:25:00.996540    4656 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
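Configuring Docker for cgroupfs above means writing a small /etc/docker/daemon.json; the key that actually switches the driver is exec-opts with native.cgroupdriver. A sketch that emits such a payload (only the driver option is shown; whatever else minikube packs into its 130-byte file is not reproduced here):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// exec-opts is Docker's documented knob for the cgroup driver.
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	b, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(b))
}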
	I0816 10:25:01.010228    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:25:01.104164    4656 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:25:03.365849    4656 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.261743451s)
	I0816 10:25:03.365910    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0816 10:25:03.376096    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:25:03.386222    4656 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0816 10:25:03.479109    4656 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0816 10:25:03.594325    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:25:03.706928    4656 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0816 10:25:03.721224    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:25:03.732283    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:25:03.827894    4656 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0816 10:25:03.888066    4656 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0816 10:25:03.888145    4656 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0816 10:25:03.893520    4656 start.go:563] Will wait 60s for crictl version
	I0816 10:25:03.893575    4656 ssh_runner.go:195] Run: which crictl
	I0816 10:25:03.896917    4656 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 10:25:03.925631    4656 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0816 10:25:03.925712    4656 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:25:03.944598    4656 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:25:03.985082    4656 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0816 10:25:04.029274    4656 out.go:177]   - env NO_PROXY=192.169.0.5
	I0816 10:25:04.051107    4656 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I0816 10:25:04.072084    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:25:04.072364    4656 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0816 10:25:04.075855    4656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
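The one-liner above is a safe upsert for /etc/hosts: grep -v drops any stale line ending in the managed hostname, echo appends the fresh mapping, and the result is staged in /tmp before sudo cp puts it in place. The same logic in Go (string-only; the temp-file-and-copy dance is elided):

package main

import (
	"fmt"
	"strings"
)

// upsertHost drops any line ending in "\t<name>" and appends a fresh
// "<ip>\t<name>" mapping, mirroring the grep -v / echo pipeline.
func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n")
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.169.0.9\thost.minikube.internal"
	fmt.Println(upsertHost(hosts, "192.169.0.1", "host.minikube.internal"))
}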
	I0816 10:25:04.085745    4656 mustload.go:65] Loading cluster: ha-286000
	I0816 10:25:04.085928    4656 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:25:04.086156    4656 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:25:04.086178    4656 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:25:04.095096    4656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52236
	I0816 10:25:04.095437    4656 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:25:04.095780    4656 main.go:141] libmachine: Using API Version  1
	I0816 10:25:04.095794    4656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:25:04.095992    4656 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:25:04.096098    4656 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:25:04.096178    4656 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:25:04.096257    4656 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 4669
	I0816 10:25:04.097216    4656 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:25:04.097478    4656 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:25:04.097503    4656 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:25:04.106283    4656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52238
	I0816 10:25:04.106623    4656 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:25:04.106944    4656 main.go:141] libmachine: Using API Version  1
	I0816 10:25:04.106954    4656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:25:04.107151    4656 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:25:04.107299    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:25:04.107413    4656 certs.go:68] Setting up /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000 for IP: 192.169.0.7
	I0816 10:25:04.107420    4656 certs.go:194] generating shared ca certs ...
	I0816 10:25:04.107432    4656 certs.go:226] acquiring lock for ca certs: {Name:mka549fbc467849834d2267da204028c49d88a3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:25:04.107603    4656 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key
	I0816 10:25:04.107673    4656 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key
	I0816 10:25:04.107682    4656 certs.go:256] generating profile certs ...
	I0816 10:25:04.107801    4656 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key
	I0816 10:25:04.107821    4656 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.fbb2b423
	I0816 10:25:04.107836    4656 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.fbb2b423 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I0816 10:25:04.288936    4656 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.fbb2b423 ...
	I0816 10:25:04.288952    4656 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.fbb2b423: {Name:mk5b5d381df2e0229dfa97b94f9501ac61e1f4af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:25:04.289301    4656 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.fbb2b423 ...
	I0816 10:25:04.289309    4656 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.fbb2b423: {Name:mk1c231c3478673ccffbd14f4f0c5e31373f1228 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:25:04.289510    4656 certs.go:381] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.fbb2b423 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt
	I0816 10:25:04.289730    4656 certs.go:385] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.fbb2b423 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key
	I0816 10:25:04.289982    4656 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key
	I0816 10:25:04.289991    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 10:25:04.290020    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 10:25:04.290039    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 10:25:04.290058    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 10:25:04.290076    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 10:25:04.290101    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 10:25:04.290120    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 10:25:04.290144    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 10:25:04.290239    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem (1338 bytes)
	W0816 10:25:04.290288    4656 certs.go:480] ignoring /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831_empty.pem, impossibly tiny 0 bytes
	I0816 10:25:04.290297    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 10:25:04.290334    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem (1078 bytes)
	I0816 10:25:04.290369    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem (1123 bytes)
	I0816 10:25:04.290397    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem (1679 bytes)
	I0816 10:25:04.290469    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:25:04.290504    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem -> /usr/share/ca-certificates/1831.pem
	I0816 10:25:04.290530    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /usr/share/ca-certificates/18312.pem
	I0816 10:25:04.290551    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:25:04.290581    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:25:04.290714    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:25:04.290801    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:25:04.290889    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:25:04.290979    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:25:04.320175    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0816 10:25:04.323948    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0816 10:25:04.332572    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0816 10:25:04.335881    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0816 10:25:04.344208    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0816 10:25:04.347261    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0816 10:25:04.355353    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0816 10:25:04.358754    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0816 10:25:04.367226    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0816 10:25:04.370644    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0816 10:25:04.379014    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0816 10:25:04.382464    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0816 10:25:04.390940    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 10:25:04.411283    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 10:25:04.431206    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 10:25:04.451054    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 10:25:04.470415    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0816 10:25:04.490122    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 10:25:04.509717    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 10:25:04.529383    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 10:25:04.549154    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem --> /usr/share/ca-certificates/1831.pem (1338 bytes)
	I0816 10:25:04.568985    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /usr/share/ca-certificates/18312.pem (1708 bytes)
	I0816 10:25:04.588519    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 10:25:04.607970    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0816 10:25:04.621401    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0816 10:25:04.635625    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0816 10:25:04.649570    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0816 10:25:04.663171    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0816 10:25:04.676495    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0816 10:25:04.690056    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0816 10:25:04.703786    4656 ssh_runner.go:195] Run: openssl version
	I0816 10:25:04.707923    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1831.pem && ln -fs /usr/share/ca-certificates/1831.pem /etc/ssl/certs/1831.pem"
	I0816 10:25:04.716268    4656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1831.pem
	I0816 10:25:04.719659    4656 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 16:57 /usr/share/ca-certificates/1831.pem
	I0816 10:25:04.719702    4656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1831.pem
	I0816 10:25:04.723849    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1831.pem /etc/ssl/certs/51391683.0"
	I0816 10:25:04.732246    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18312.pem && ln -fs /usr/share/ca-certificates/18312.pem /etc/ssl/certs/18312.pem"
	I0816 10:25:04.740650    4656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18312.pem
	I0816 10:25:04.743948    4656 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 16:57 /usr/share/ca-certificates/18312.pem
	I0816 10:25:04.743983    4656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18312.pem
	I0816 10:25:04.748103    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18312.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 10:25:04.756745    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 10:25:04.765039    4656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:25:04.768354    4656 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:48 /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:25:04.768417    4656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:25:04.772556    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
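Each openssl x509 -hash -noout call above prints the certificate's subject-name hash, and the ln -fs step publishes the cert under <hash>.0 in /etc/ssl/certs, the filename scheme OpenSSL's directory lookup expects (b5213941.0 is the hash for minikubeCA.pem). A small Go sketch of that hash-and-link step (paths are illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert computes the OpenSSL subject hash for a cert and links it as
// <hash>.0 so the cert-directory lookup can resolve it.
func linkCert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}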
	I0816 10:25:04.781063    4656 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 10:25:04.784249    4656 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 10:25:04.784287    4656 kubeadm.go:934] updating node {m03 192.169.0.7 8443 v1.31.0 docker true true} ...
	I0816 10:25:04.784343    4656 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-286000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 10:25:04.784359    4656 kube-vip.go:115] generating kube-vip config ...
	I0816 10:25:04.784396    4656 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 10:25:04.796986    4656 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 10:25:04.797028    4656 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
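The manifest above is a static pod: kube-vip runs on each control-plane node, holds the plndr-cp-lock lease for leader election, answers ARP for the VIP 192.169.0.254, and (with lb_enable) load-balances port 8443 across the control planes; every setting is passed as an environment variable. A trimmed sketch of generating such a manifest from the few values that vary (this template is illustrative, not minikube's actual one):

package main

import (
	"os"
	"text/template"
)

var manifest = template.Must(template.New("kube-vip").Parse(`apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    args: ["manager"]
    env:
    - name: vip_interface
      value: {{.Interface}}
    - name: port
      value: "{{.Port}}"
    - name: address
      value: {{.VIP}}
  hostNetwork: true
`))

func main() {
	// Fill in the node-specific values seen in the log above.
	_ = manifest.Execute(os.Stdout, struct {
		Interface, VIP string
		Port           int
	}{"eth0", "192.169.0.254", 8443})
}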
	I0816 10:25:04.797080    4656 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 10:25:04.805783    4656 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0816 10:25:04.805828    4656 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0816 10:25:04.815857    4656 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0816 10:25:04.815860    4656 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0816 10:25:04.815857    4656 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0816 10:25:04.815875    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0816 10:25:04.815878    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0816 10:25:04.815911    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:25:04.815963    4656 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0816 10:25:04.815967    4656 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0816 10:25:04.819783    4656 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0816 10:25:04.819808    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0816 10:25:04.819886    4656 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0816 10:25:04.819905    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0816 10:25:04.838560    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0816 10:25:04.838690    4656 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0816 10:25:04.892677    4656 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0816 10:25:04.892722    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
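Each binary above follows the same idiom: stat -c "%s %y" on the target path, and only when the stat misses (exit status 1) does the scp run. A minimal local stand-in for that check-then-copy flow (plain files replace the SSH/scp pair; paths are illustrative, and the size/mtime comparison on a hit is omitted):

package main

import (
	"fmt"
	"io"
	"os"
)

// ensureBinary copies src to dst only if dst does not already exist,
// mirroring the existence check in the log above.
func ensureBinary(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		return nil // already present, skip the transfer
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	if err := ensureBinary("cache/kubeadm", "/var/lib/minikube/binaries/v1.31.0/kubeadm"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}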
	I0816 10:25:05.452270    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0816 10:25:05.460515    4656 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0816 10:25:05.473974    4656 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 10:25:05.487288    4656 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0816 10:25:05.501421    4656 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0816 10:25:05.504340    4656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:25:05.514511    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:25:05.610695    4656 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:25:05.627113    4656 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:25:05.627365    4656 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:25:05.650018    4656 out.go:177] * Verifying Kubernetes components...
	I0816 10:25:05.671252    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:25:05.770878    4656 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:25:06.484588    4656 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:25:06.484787    4656 kapi.go:59] client config for ha-286000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key", CAFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x5540f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0816 10:25:06.484828    4656 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0816 10:25:06.484987    4656 node_ready.go:35] waiting up to 6m0s for node "ha-286000-m03" to be "Ready" ...
	I0816 10:25:06.485034    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:06.485039    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:06.485045    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:06.485048    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:06.487783    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
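What follows is a readiness poll: every ~500ms the client GETs the node object, and while kubeadm join has not yet registered ha-286000-m03 the API server answers 404; the loop gives up after the 6m budget set above. A bare-bones sketch of that loop (plain net/http, with the TLS client-certificate setup and response parsing omitted; the URL is the one from this log):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForNode polls the node URL until it returns 200 or the deadline
// passes; 404 means the node has not registered yet.
func waitForNode(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // registered; next, check the Ready condition
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node not registered within %v", timeout)
}

func main() {
	err := waitForNode("https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03", 6*time.Minute)
	fmt.Println(err)
}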
	I0816 10:25:06.985311    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:06.985336    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:06.985348    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:06.985354    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:06.989349    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:07.485490    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:07.485513    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:07.485524    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:07.485529    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:07.489016    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:07.985178    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:07.985193    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:07.985199    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:07.985202    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:07.987679    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:08.487278    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:08.487300    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:08.487309    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:08.487315    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:08.491486    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:25:08.491567    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:08.987160    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:08.987184    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:08.987194    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:08.987200    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:08.990942    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:09.485053    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:09.485101    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:09.485109    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:09.485113    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:09.487562    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:09.985592    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:09.985671    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:09.985687    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:09.985696    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:09.989637    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:10.486025    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:10.486050    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:10.486061    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:10.486067    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:10.489557    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:10.985086    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:10.985127    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:10.985134    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:10.985139    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:10.987914    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:10.987975    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:11.485153    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:11.485176    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:11.485186    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:11.485193    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:11.488752    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:11.986139    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:11.986154    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:11.986162    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:11.986166    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:11.989386    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:12.485803    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:12.485849    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:12.485865    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:12.485870    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:12.489472    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:12.986570    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:12.986596    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:12.986607    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:12.986612    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:12.990236    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:12.990376    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:13.484914    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:13.484926    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:13.484932    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:13.484935    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:13.488977    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:25:13.986680    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:13.986696    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:13.986702    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:13.986705    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:13.989158    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:14.486321    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:14.486382    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:14.486402    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:14.486412    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:14.491203    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:25:14.985877    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:14.985901    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:14.985912    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:14.985949    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:14.989703    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:15.485277    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:15.485292    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:15.485299    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:15.485302    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:15.487768    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:15.487830    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:15.985642    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:15.985663    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:15.985675    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:15.985680    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:15.989433    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:16.484901    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:16.484927    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:16.484939    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:16.484944    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:16.488779    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:16.986034    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:16.986047    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:16.986054    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:16.986062    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:16.988709    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:17.486864    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:17.486887    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:17.486924    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:17.486931    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:17.490473    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:17.490551    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:17.985889    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:17.985909    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:17.985921    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:17.985925    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:17.989836    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:18.485398    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:18.485414    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:18.485421    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:18.485425    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:18.487889    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:18.985349    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:18.985378    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:18.985436    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:18.985442    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:18.988422    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:19.485081    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:19.485102    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:19.485113    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:19.485121    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:19.488852    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:19.985049    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:19.985062    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:19.985081    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:19.985085    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:19.987210    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:19.987270    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:20.484914    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:20.484939    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:20.484949    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:20.484954    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:20.488695    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:20.985203    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:20.985229    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:20.985239    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:20.985245    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:20.989283    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:25:21.484963    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:21.484979    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:21.484985    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:21.484989    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:21.487275    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:21.985755    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:21.985782    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:21.985793    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:21.985798    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:21.989914    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:25:21.989997    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:22.485717    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:22.485745    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:22.485824    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:22.485835    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:22.489667    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:22.985286    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:22.985301    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:22.985307    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:22.985318    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:22.987903    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:23.485546    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:23.485567    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:23.485578    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:23.485583    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:23.489380    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:23.985686    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:23.985757    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:23.985777    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:23.985792    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:23.989466    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:24.484557    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:24.484568    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:24.484575    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:24.484578    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:24.487089    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:24.487151    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:24.985579    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:24.985600    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:24.985609    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:24.985614    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:24.989536    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:25.485541    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:25.485564    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:25.485576    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:25.485583    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:25.489272    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:25.984513    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:25.984529    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:25.984536    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:25.984540    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:25.987095    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:26.486003    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:26.486022    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:26.486034    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:26.486043    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:26.489357    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:26.489445    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
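	// The repeating GET -> 404 cycle is the node-readiness wait: about every
	// 500 ms minikube asks the API server whether the joining node
	// "ha-286000-m03" has registered, and node_ready.go:53 surfaces the
	// "not found" error until it does (or the wait times out). A minimal
	// sketch of such a loop, assuming a client-go Clientset;
	// waitForNodeRegistered is a hypothetical helper name, not minikube's
	// actual code, and the real loop logs the error less often than it polls.
	package nodewait

	import (
		"context"
		"fmt"
		"time"

		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForNodeRegistered polls until the named Node object exists or the
	// timeout elapses; a 404 means "not registered yet" and is retried.
	func waitForNodeRegistered(ctx context.Context, c kubernetes.Interface, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			_, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				return nil // node exists; readiness condition checks come next
			}
			if !apierrors.IsNotFound(err) {
				return err // anything but a 404 is a real failure
			}
			fmt.Printf("error getting node %q: %v\n", name, err)
			time.Sleep(500 * time.Millisecond) // the ~500 ms cadence seen in this log
		}
		return fmt.Errorf("node %q was not registered within %v", name, timeout)
	}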
	[... the same cycle repeats every ~500 ms from 10:25:26.985 through 10:26:21.486: GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03 (same Accept and User-Agent headers) answered 404 Not Found in 1-5 milliseconds, with node_ready.go:53 logging error getting node "ha-286000-m03": nodes "ha-286000-m03" not found every 2-3 seconds; ~110 identical request/response blocks condensed ...]
	I0816 10:26:21.983903    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:21.983922    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:21.983934    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:21.983940    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:21.986911    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:21.986973    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:22.484112    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:22.484134    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:22.484147    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:22.484152    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:22.488262    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:26:22.983975    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:22.984028    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:22.984035    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:22.984039    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:22.986443    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:23.483009    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:23.483033    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:23.483050    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:23.483066    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:23.486675    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:23.983451    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:23.983483    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:23.983500    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:23.983511    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:23.987001    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:23.987063    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:24.483488    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:24.483536    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:24.483547    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:24.483551    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:24.485853    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:24.982731    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:24.982743    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:24.982750    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:24.982753    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:24.984789    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:25.483610    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:25.483630    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:25.483639    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:25.483645    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:25.487060    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:25.982597    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:25.982610    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:25.982622    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:25.982626    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:25.994285    4656 round_trippers.go:574] Response Status: 404 Not Found in 11 milliseconds
	I0816 10:26:25.994342    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:26.483108    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:26.483129    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:26.483141    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:26.483147    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:26.486703    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:26.984543    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:26.984561    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:26.984570    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:26.984574    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:26.987295    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:27.484057    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:27.484070    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:27.484076    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:27.484079    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:27.486438    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:27.982568    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:27.982579    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:27.982586    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:27.982589    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:27.984714    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:28.482928    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:28.482954    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:28.482966    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:28.482971    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:28.486982    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:28.487049    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:28.983984    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:28.984000    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:28.984007    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:28.984010    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:28.986187    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:29.482503    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:29.482527    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:29.482539    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:29.482545    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:29.485679    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:29.982668    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:29.982688    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:29.982700    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:29.982707    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:29.986106    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:30.483014    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:30.483035    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:30.483044    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:30.483048    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:30.485517    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:30.984509    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:30.984533    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:30.984544    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:30.984596    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:30.988289    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:30.988408    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:31.483916    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:31.483943    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:31.483981    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:31.483990    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:31.487890    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:31.982923    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:31.982943    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:31.982952    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:31.982956    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:31.985708    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:32.483569    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:32.483593    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:32.483605    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:32.483616    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:32.487327    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:32.982635    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:32.982661    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:32.982673    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:32.982679    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:32.986374    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:33.482846    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:33.482858    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:33.482872    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:33.482882    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:33.485277    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:33.485339    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:33.982793    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:33.982819    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:33.982831    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:33.982836    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:33.986153    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:34.482560    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:34.482578    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:34.482604    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:34.482610    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:34.486015    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:34.982428    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:34.982450    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:34.982463    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:34.982469    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:34.985873    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:35.483727    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:35.483740    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:35.483747    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:35.483751    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:35.485833    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:35.485894    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:35.982916    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:35.982943    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:35.982955    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:35.982965    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:35.986742    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:36.483103    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:36.483123    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:36.483132    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:36.483135    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:36.485868    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:36.982704    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:36.982762    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:36.982776    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:36.982790    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:36.986222    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:37.483468    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:37.483488    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:37.483500    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:37.483506    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:37.487244    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:37.487314    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:37.983372    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:37.983388    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:37.983394    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:37.983397    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:37.985922    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:38.483160    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:38.483179    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:38.483191    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:38.483199    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:38.486492    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:38.982468    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:38.982483    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:38.982489    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:38.982493    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:38.984866    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:39.482442    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:39.482495    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:39.482503    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:39.482507    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:39.484936    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:39.982412    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:39.982432    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:39.982441    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:39.982450    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:39.986230    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:39.986305    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:40.483055    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:40.483077    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:40.483087    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:40.483096    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:40.486444    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:40.983022    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:40.983056    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:40.983064    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:40.983068    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:40.985224    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:41.482184    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:41.482204    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:41.482215    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:41.482220    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:41.485468    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:41.983203    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:41.983227    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:41.983306    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:41.983320    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:41.987091    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:41.987171    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:42.483067    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:42.483083    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:42.483092    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:42.483096    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:42.485854    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:42.982325    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:42.982346    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:42.982358    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:42.982367    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:42.985247    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:43.482212    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:43.482232    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:43.482245    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:43.482253    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:43.485500    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:43.982210    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:43.982226    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:43.982232    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:43.982235    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:43.984789    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:44.483719    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:44.483739    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:44.483750    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:44.483758    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:44.487463    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:44.487539    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:44.984070    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:44.984094    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:44.984106    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:44.984112    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:44.987930    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:45.483159    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:45.483174    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:45.483183    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:45.483188    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:45.485689    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:45.982348    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:45.982376    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:45.982441    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:45.982451    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:45.986431    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:46.483035    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:46.483061    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:46.483073    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:46.483079    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:46.487152    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:26:46.982639    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:46.982696    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:46.982710    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:46.982717    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:46.986259    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:46.986315    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:47.482155    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:47.482188    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:47.482237    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:47.482249    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:47.485627    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:47.983982    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:47.984007    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:47.984020    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:47.984026    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:47.988122    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:26:48.482121    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:48.482168    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:48.482175    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:48.482179    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:48.484595    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:48.983532    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:48.983556    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:48.983569    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:48.983574    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:48.987409    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:48.987484    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:49.483718    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:49.483736    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:49.483748    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:49.483754    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:49.487115    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:49.982660    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:49.982682    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:49.982692    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:49.982696    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:49.985469    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:50.481995    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:50.482014    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:50.482032    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:50.482058    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:50.485582    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:50.981809    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:50.981828    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:50.981835    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:50.981839    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:50.984238    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:51.482206    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:51.482226    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:51.482236    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:51.482241    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:51.485102    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:51.485201    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:51.983488    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:51.983503    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:51.983512    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:51.983516    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:51.986249    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:52.482268    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:52.482293    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:52.482304    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:52.482311    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:52.485931    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:52.983543    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:52.983556    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:52.983562    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:52.983564    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:52.987568    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:53.482529    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:53.482553    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:53.482590    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:53.482612    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:53.486396    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:53.486481    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:53.983382    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:53.983409    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:53.983421    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:53.983426    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:53.987647    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:26:54.482288    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:54.482367    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:54.482378    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:54.482383    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:54.484925    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:54.983458    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:54.983478    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:54.983490    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:54.983497    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:54.987016    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:55.482017    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:55.482037    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:55.482048    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:55.482054    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:55.485201    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:55.983339    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:55.983353    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:55.983360    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:55.983377    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:55.985849    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:55.985910    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:56.483753    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:56.483779    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:56.483792    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:56.483798    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:56.487683    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:56.983682    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:56.983735    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:56.983749    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:56.983758    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:56.987724    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:57.481708    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:57.481724    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:57.481730    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:57.481733    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:57.483972    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:57.983723    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:57.983751    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:57.983772    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:57.983782    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:57.987662    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:57.987781    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:58.481946    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:58.481978    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:58.481989    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:58.481998    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:58.485616    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:58.982478    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:58.982494    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:58.982501    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:58.982503    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:58.984797    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:59.482635    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:59.482661    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:59.482672    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:59.482678    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:59.486199    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:59.983080    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:59.983108    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:59.983179    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:59.983189    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:59.986765    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:00.481883    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:00.481904    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:00.481916    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:00.481923    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:00.485164    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:00.485241    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:00.983581    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:00.983606    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:00.983618    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:00.983626    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:00.987095    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:01.481499    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:01.481518    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:01.481530    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:01.481536    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:01.484541    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:01.981949    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:01.981971    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:01.981980    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:01.981985    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:01.984730    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:02.483014    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:02.483039    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:02.483050    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:02.483057    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:02.486856    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:02.486952    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:02.982039    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:02.982061    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:02.982075    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:02.982083    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:02.986009    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:03.482044    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:03.482058    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:03.482064    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:03.482068    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:03.484293    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:03.982493    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:03.982521    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:03.982589    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:03.982599    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:03.986547    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:04.481423    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:04.481443    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:04.481481    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:04.481492    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:04.484534    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:04.981631    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:04.981650    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:04.981659    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:04.981665    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:04.984478    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:04.984535    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:05.481850    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:05.481876    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:05.481888    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:05.481895    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:05.485885    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:05.983485    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:05.983508    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:05.983520    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:05.983529    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:05.987747    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:27:06.481638    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:06.481654    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:06.481660    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:06.481666    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:06.483910    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:06.982417    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:06.982443    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:06.982456    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:06.982461    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:06.986711    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:27:06.986836    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:07.482901    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:07.482925    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:07.482937    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:07.482944    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:07.486790    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:07.981354    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:07.981370    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:07.981376    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:07.981380    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:07.984233    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:08.482884    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:08.482907    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:08.482918    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:08.482923    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:08.486675    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:08.983285    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:08.983308    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:08.983320    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:08.983362    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:08.987075    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:08.987178    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:09.481582    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:09.481596    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:09.481602    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:09.481615    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:09.484345    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:09.982946    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:09.982968    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:09.982980    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:09.982987    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:09.987241    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:27:10.482214    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:10.482233    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:10.482245    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:10.482250    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:10.485342    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:10.981598    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:10.981613    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:10.981647    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:10.981651    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:10.983798    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:11.481915    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:11.481938    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:11.481949    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:11.481956    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:11.485887    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:11.485960    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:11.982040    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:11.982065    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:11.982077    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:11.982085    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:11.985843    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:12.481119    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:12.481134    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:12.481140    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:12.481144    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:12.483753    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:12.983314    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:12.983335    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:12.983348    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:12.983354    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:12.987658    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:27:13.483200    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:13.483225    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:13.483237    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:13.483242    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:13.487000    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:13.487075    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:13.981082    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:13.981098    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:13.981104    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:13.981107    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:13.983666    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:14.481510    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:14.481533    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:14.481546    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:14.481553    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:14.485493    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:14.982587    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:14.982611    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:14.982623    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:14.982632    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:14.986953    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:27:15.481989    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:15.482002    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:15.482008    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:15.482011    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:15.484306    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:15.983142    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:15.983197    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:15.983212    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:15.983220    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:15.987145    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:15.987217    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:16.482640    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:16.482663    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:16.482676    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:16.482682    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:16.486588    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:16.982739    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:16.982758    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:16.982767    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:16.982771    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:16.985870    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:17.482222    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:17.482247    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:17.482259    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:17.482264    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:17.486553    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:27:17.982295    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:17.982319    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:17.982345    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:17.982355    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:17.986295    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:18.481466    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:18.481480    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:18.481501    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:18.481505    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:18.484182    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:18.484250    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:18.981829    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:18.981869    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:18.981879    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:18.981887    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:18.984310    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:19.481304    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:19.481354    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:19.481368    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:19.481374    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:19.485047    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:19.981003    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:19.981016    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:19.981022    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:19.981026    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:19.983258    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:20.482082    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:20.482099    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:20.482107    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:20.482110    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:20.484774    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:20.484831    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:20.982149    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:20.982161    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:20.982167    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:20.982171    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:20.984491    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:21.482759    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:21.482774    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:21.482784    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:21.482805    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:21.488307    4656 round_trippers.go:574] Response Status: 404 Not Found in 5 milliseconds
	I0816 10:27:21.980923    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:21.980944    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:21.980956    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:21.980962    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:21.985236    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:27:22.480954    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:22.480982    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:22.481000    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:22.481007    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:22.484623    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:22.982155    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:22.982170    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:22.982177    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:22.982183    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:22.985131    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:22.985233    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:23.481447    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:23.481473    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:23.481485    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:23.481493    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:23.485171    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:23.980807    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:23.980841    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:23.980854    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:23.980886    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:23.984726    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:24.481009    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:24.481023    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:24.481030    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:24.481033    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:24.483629    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:24.981780    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:24.981800    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:24.981812    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:24.981817    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:24.985032    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:25.482336    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:25.482370    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:25.482430    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:25.482437    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:25.486196    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:25.486271    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:25.981022    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:25.981035    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:25.981041    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:25.981048    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:25.983833    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:26.481578    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:26.481603    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:26.481614    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:26.481620    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:26.485938    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:27:26.981068    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:26.981108    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:26.981117    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:26.981122    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:26.983762    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:27.481705    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:27.481739    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:27.481747    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:27.481751    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:27.484193    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:27.981754    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:27.981779    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:27.981791    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:27.981804    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:27.985583    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:27.985651    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:28.481144    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:28.481173    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:28.481209    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:28.481216    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:28.484725    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:28.981756    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:28.981769    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:28.981776    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:28.981779    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:28.984303    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:29.481471    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:29.481547    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:29.481562    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:29.481571    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:29.484980    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:29.981350    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:29.981376    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:29.981388    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:29.981394    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:29.985134    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:30.481784    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:30.481800    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:30.481807    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:30.481810    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:30.483987    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:30.484040    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:30.981042    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:30.981064    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:30.981075    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:30.981082    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:30.985035    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:31.480553    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:31.480568    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:31.480576    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:31.480580    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:31.483746    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:31.981346    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:31.981362    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:31.981368    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:31.981372    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:31.983579    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:32.481011    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:32.481036    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:32.481048    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:32.481054    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:32.484005    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:32.484066    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:32.980838    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:32.980858    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:32.980869    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:32.980876    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:32.984769    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:33.481797    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:33.481813    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:33.481819    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:33.481822    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:33.484075    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:33.980538    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:33.980569    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:33.980581    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:33.980586    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:33.984292    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:34.480611    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:34.480633    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:34.480644    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:34.480650    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:34.484424    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:34.484495    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:34.980662    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:34.980675    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:34.980685    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:34.980688    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:34.983333    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:35.481072    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:35.481093    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:35.481104    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:35.481109    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:35.484858    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:35.980573    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:35.980600    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:35.980613    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:35.980619    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:35.984318    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:36.481723    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:36.481742    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:36.481750    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:36.481755    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:36.484525    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:36.484582    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:36.981468    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:36.981491    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:36.981534    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:36.981541    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:36.985480    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:37.481087    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:37.481115    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:37.481127    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:37.481133    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:37.484349    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:37.981606    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:37.981618    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:37.981624    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:37.981628    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:37.984174    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:38.480919    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:38.480942    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:38.480954    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:38.480960    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:38.484462    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:38.484530    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:38.980883    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:38.980958    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:38.980971    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:38.980976    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:38.985426    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:27:39.480691    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:39.480705    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:39.480711    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:39.480714    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:39.483370    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:39.980523    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:39.980543    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:39.980554    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:39.980559    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:39.983705    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:40.480857    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:40.480870    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:40.480876    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:40.480880    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:40.483015    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:40.980527    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:40.980547    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:40.980559    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:40.980566    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:40.984425    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:40.984557    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:41.480215    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:41.480250    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:41.480259    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:41.480264    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:41.482681    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:41.980221    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:41.980233    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:41.980238    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:41.980241    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:41.983101    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:42.481763    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:42.481782    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:42.481794    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:42.481801    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:42.484939    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:42.981092    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:42.981114    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:42.981125    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:42.981131    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:42.985191    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:27:42.985282    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:43.481456    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:43.481481    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:43.481493    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:43.481498    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:43.485020    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:43.981686    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:43.981734    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:43.981742    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:43.981745    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:43.984138    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:44.480895    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:44.480921    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:44.480934    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:44.480940    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:44.484864    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:44.980350    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:44.980376    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:44.980388    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:44.980394    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:44.984559    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:27:45.480493    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:45.480509    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:45.480518    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:45.480523    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:45.483088    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:45.483193    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:45.981740    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:45.981766    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:45.981778    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:45.981787    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:45.985812    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:27:46.480744    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:46.480771    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:46.480782    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:46.480788    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:46.484433    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:46.980028    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:46.980044    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:46.980052    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:46.980058    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:46.982468    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:47.480811    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:47.480834    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:47.480846    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:47.480854    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:47.484154    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:47.484225    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:47.981495    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:47.981558    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:47.981573    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:47.981579    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:47.984852    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:48.481331    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:48.481350    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:48.481357    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:48.481360    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:48.483672    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:48.981308    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:48.981334    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:48.981345    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:48.981351    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:48.987316    4656 round_trippers.go:574] Response Status: 404 Not Found in 5 milliseconds
	I0816 10:27:49.480610    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:49.480631    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:49.480642    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:49.480650    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:49.484493    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:49.484576    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:49.980270    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:49.980291    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:49.980303    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:49.980311    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:49.983514    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:50.480630    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:50.480652    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:50.480663    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:50.480672    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:50.484716    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:27:50.980998    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:50.981031    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:50.981079    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:50.981089    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:50.984717    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:51.481764    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:51.481781    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:51.481788    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:51.481792    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:51.483882    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:51.981147    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:51.981167    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:51.981178    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:51.981185    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:51.984837    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:51.984916    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:52.480088    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:52.480109    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:52.480121    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:52.480126    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:52.483987    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:52.980987    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:52.981013    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:52.981029    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:52.981059    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:52.984581    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:53.480043    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:53.480063    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:53.480084    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:53.480092    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:53.483664    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:53.980634    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:53.980693    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:53.980706    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:53.980711    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:53.984482    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:54.480029    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:54.480042    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:54.480051    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:54.480056    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:54.482803    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:54.482872    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:54.980002    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:54.980026    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:54.980038    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:54.980043    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:54.983690    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:55.480147    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:55.480213    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:55.480241    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:55.480251    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:55.484002    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:55.980804    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:55.980819    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:55.980825    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:55.980828    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:55.982902    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:56.480975    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:56.480997    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:56.481006    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:56.481011    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:56.484989    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:56.485061    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:56.980849    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:56.980870    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:56.980880    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:56.980888    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:56.984648    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:57.479708    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:57.479723    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:57.479732    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:57.479736    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:57.482298    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:57.979711    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:57.979729    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:57.979741    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:57.979746    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:57.983031    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:58.481734    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:58.481790    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:58.481805    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:58.481814    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:58.486010    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:27:58.486113    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:58.980860    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:58.980917    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:58.980929    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:58.980937    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:58.984281    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:59.480008    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:59.480075    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:59.480090    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:59.480100    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:59.483377    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:59.981599    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:59.981621    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:59.981633    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:59.981639    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:59.985606    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:00.480770    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:00.480786    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:00.480795    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:00.480798    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:00.483310    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:00.980781    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:00.980807    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:00.980817    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:00.980824    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:00.984773    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:00.984872    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:01.480191    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:01.480210    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:01.480218    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:01.480222    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:01.482706    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:01.979918    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:01.979940    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:01.979950    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:01.979955    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:01.982361    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:02.481286    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:02.481302    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:02.481308    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:02.481311    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:02.483655    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:02.980572    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:02.980632    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:02.980646    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:02.980655    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:02.984337    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:03.479541    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:03.479553    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:03.479560    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:03.479562    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:03.482043    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:03.482109    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:03.980816    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:03.980840    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:03.980877    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:03.980906    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:03.984861    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:04.481240    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:04.481266    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:04.481276    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:04.481282    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:04.485558    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:04.981353    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:04.981413    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:04.981429    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:04.981438    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:04.984812    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:05.480489    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:05.480511    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:05.480523    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:05.480528    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:05.484058    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:05.484144    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:05.979456    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:05.979471    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:05.979480    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:05.979485    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:05.981941    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:06.480803    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:06.480823    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:06.480834    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:06.480841    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:06.483869    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:06.980347    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:06.980368    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:06.980379    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:06.980384    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:06.983544    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:07.479393    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:07.479421    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:07.479481    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:07.479491    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:07.483249    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:07.979964    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:07.979979    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:07.979985    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:07.979988    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:07.983187    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:07.983251    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:08.479456    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:08.479474    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:08.479486    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:08.479493    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:08.483132    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:08.980053    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:08.980073    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:08.980083    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:08.980090    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:08.983933    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:09.481215    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:09.481229    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:09.481237    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:09.481242    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:09.483856    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:09.980082    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:09.980109    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:09.980121    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:09.980129    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:09.983657    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:09.983727    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:10.481137    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:10.481162    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:10.481171    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:10.481178    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:10.485023    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:10.979382    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:10.979406    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:10.979418    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:10.979425    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:10.982616    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:11.480878    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:11.480900    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:11.480924    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:11.480931    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:11.484400    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:11.980148    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:11.980201    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:11.980213    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:11.980220    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:11.983261    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:12.479546    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:12.479558    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:12.479564    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:12.479568    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:12.482006    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:12.482066    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:12.980407    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:12.980433    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:12.980446    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:12.980455    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:12.984259    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:13.481285    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:13.481304    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:13.481316    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:13.481321    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:13.484864    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:13.980948    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:13.980967    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:13.981024    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:13.981032    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:13.983792    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:14.480529    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:14.480592    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:14.480607    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:14.480615    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:14.485369    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:14.485425    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:14.980508    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:14.980528    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:14.980540    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:14.980546    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:14.984308    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:15.479351    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:15.479366    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:15.479375    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:15.479378    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:15.482333    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:15.979273    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:15.979296    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:15.979308    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:15.979317    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:15.983036    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:16.480267    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:16.480288    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:16.480300    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:16.480306    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:16.484104    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:16.979260    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:16.979282    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:16.979294    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:16.979302    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:16.983145    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:16.983218    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:17.479986    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:17.480012    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:17.480023    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:17.480031    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:17.483621    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:17.980230    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:17.980255    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:17.980267    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:17.980273    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:17.983388    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:18.479428    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:18.479444    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:18.479452    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:18.479457    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:18.482401    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:18.980054    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:18.980078    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:18.980090    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:18.980111    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:18.984291    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:18.984384    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:19.479204    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:19.479223    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:19.479235    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:19.479241    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:19.482609    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:19.980334    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:19.980358    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:19.980370    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:19.980376    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:19.984055    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:20.479678    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:20.479704    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:20.479716    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:20.479722    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:20.483940    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:20.980207    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:20.980232    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:20.980243    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:20.980248    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:20.984073    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:21.479009    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:21.479028    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:21.479039    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:21.479045    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:21.482870    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:21.482946    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:21.979028    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:21.979048    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:21.979060    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:21.979067    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:21.982782    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:22.480202    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:22.480229    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:22.480242    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:22.480248    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:22.484332    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:22.979809    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:22.979829    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:22.979861    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:22.979867    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:22.982210    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:23.480520    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:23.480541    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:23.480556    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:23.480564    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:23.484344    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:23.484415    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:23.978872    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:23.978890    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:23.978939    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:23.978947    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:23.981588    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:24.478940    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:24.479024    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:24.479038    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:24.479046    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:24.482719    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:24.980016    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:24.980040    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:24.980053    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:24.980061    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:24.984315    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:25.478940    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:25.478960    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:25.478971    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:25.478978    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:25.483052    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:25.979269    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:25.979296    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:25.979308    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:25.979314    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:25.983114    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:25.983257    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:26.479781    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:26.479806    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:26.479817    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:26.479828    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:26.483419    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:26.979605    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:26.979626    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:26.979637    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:26.979644    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:26.982753    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:27.479413    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:27.479438    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:27.479450    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:27.479458    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:27.483110    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:27.980825    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:27.980852    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:27.980863    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:27.980870    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:27.984767    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:27.984839    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:28.479839    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:28.479867    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:28.479880    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:28.479888    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:28.483764    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:28.978775    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:28.978797    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:28.978808    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:28.978815    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:28.982911    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:29.480812    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:29.480838    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:29.480848    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:29.480854    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:29.484272    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:29.980179    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:29.980196    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:29.980204    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:29.980208    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:29.983010    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:30.479018    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:30.479037    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:30.479056    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:30.479060    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:30.480976    4656 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0816 10:28:30.481040    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:30.979780    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:30.979800    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:30.979810    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:30.979815    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:30.983686    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:31.479047    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:31.479069    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:31.479081    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:31.479088    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:31.482916    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:31.979327    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:31.979383    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:31.979396    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:31.979406    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:31.982781    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:32.479680    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:32.479701    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:32.479712    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:32.479718    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:32.483452    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:32.483551    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:32.979627    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:32.979653    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:32.979665    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:32.979672    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:32.983502    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:33.479195    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:33.479213    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:33.479223    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:33.479231    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:33.482627    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:33.978591    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:33.978614    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:33.978669    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:33.978677    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:33.982499    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:34.478777    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:34.478796    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:34.478805    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:34.478810    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:34.481463    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:34.979814    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:34.979835    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:34.979847    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:34.979856    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:34.984020    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:34.984095    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:35.478731    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:35.478759    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:35.478769    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:35.478775    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:35.482596    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:35.979086    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:35.979114    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:35.979127    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:35.979133    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:35.982826    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:36.478524    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:36.478548    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:36.478560    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:36.478568    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:36.482514    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:36.978759    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:36.978778    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:36.978789    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:36.978795    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:36.982532    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:37.478813    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:37.478836    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:37.478848    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:37.478854    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:37.482815    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:37.483027    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:37.980493    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:37.980519    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:37.980530    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:37.980535    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:37.984193    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:38.479572    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:38.479594    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:38.479607    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:38.479615    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:38.483949    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:38.980347    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:38.980372    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:38.980383    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:38.980388    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:38.984077    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:39.480084    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:39.480110    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:39.480121    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:39.480127    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:39.483858    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:39.483927    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:39.978886    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:39.978908    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:39.978920    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:39.978927    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:39.982482    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:40.478804    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:40.478830    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:40.478841    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:40.478847    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:40.482793    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:40.979356    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:40.979380    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:40.979392    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:40.979401    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:40.983583    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:41.479873    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:41.479894    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:41.479913    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:41.479918    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:41.483490    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:41.978368    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:41.978382    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:41.978389    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:41.978393    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:41.984198    4656 round_trippers.go:574] Response Status: 404 Not Found in 5 milliseconds
	I0816 10:28:41.984261    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:42.478642    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:42.478662    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:42.478675    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:42.478681    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:42.482721    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:42.979333    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:42.979358    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:42.979370    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:42.979376    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:42.983591    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:43.478780    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:43.478803    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:43.478816    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:43.478824    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:43.482771    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:43.978807    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:43.978858    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:43.978871    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:43.978878    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:43.982183    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:44.479103    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:44.479131    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:44.479208    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:44.479217    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:44.483010    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:44.483102    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:44.980168    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:44.980193    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:44.980205    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:44.980212    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:44.984284    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:45.478814    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:45.478840    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:45.478851    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:45.478856    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:45.482566    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:45.978463    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:45.978490    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:45.978503    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:45.978509    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:45.982104    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:46.478332    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:46.478358    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:46.478370    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:46.478376    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:46.482196    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:46.980202    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:46.980226    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:46.980235    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:46.980242    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:46.984038    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:46.984113    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:47.480191    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:47.480236    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:47.480249    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:47.480256    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:47.483962    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:47.978487    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:47.978512    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:47.978524    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:47.978529    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:47.982450    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:48.478150    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:48.478167    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:48.478183    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:48.478192    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:48.481632    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:48.978324    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:48.978347    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:48.978359    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:48.978366    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:48.982094    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:49.479467    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:49.479488    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:49.479500    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:49.479508    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:49.483304    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:49.483387    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:49.979540    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:49.979559    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:49.979567    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:49.979571    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:49.982173    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:50.478844    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:50.478865    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:50.478876    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:50.478882    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:50.482687    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:50.979032    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:50.979057    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:50.979069    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:50.979075    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:50.982937    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:51.477969    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:51.477985    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:51.477992    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:51.477996    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:51.480844    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:51.978499    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:51.978525    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:51.978594    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:51.978604    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:51.982296    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:51.982369    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:52.478660    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:52.478681    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:52.478693    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:52.478700    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:52.482493    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:52.979157    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:52.979218    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:52.979232    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:52.979243    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:52.982949    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:53.477935    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:53.477952    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:53.477964    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:53.477971    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:53.481445    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:53.979399    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:53.979426    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:53.979437    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:53.979442    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:53.983298    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:53.983373    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:54.477959    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:54.477983    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:54.477992    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:54.478000    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:54.480818    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:54.977914    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:54.977928    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:54.977937    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:54.977943    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:54.980985    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:55.477939    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:55.477959    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:55.477971    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:55.477980    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:55.481823    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:55.978706    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:55.978725    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:55.978734    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:55.978740    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:55.981215    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:56.478017    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:56.478041    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:56.478055    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:56.478066    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:56.481827    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:56.481901    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:56.979955    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:56.979976    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:56.979987    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:56.979994    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:56.984295    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:57.478039    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:57.478057    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:57.478067    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:57.478073    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:57.481105    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:57.978248    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:57.978270    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:57.978283    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:57.978291    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:57.982239    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:58.477943    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:58.477971    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:58.477987    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:58.478001    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:58.481727    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:58.978661    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:58.978678    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:58.978687    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:58.978693    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:58.981579    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:58.981644    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:59.479830    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:59.479861    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:59.479927    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:59.479949    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:59.483371    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:59.977787    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:59.977804    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:59.977810    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:59.977813    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:59.979974    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:29:00.478024    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:00.478039    4656 round_trippers.go:469] Request Headers:
	I0816 10:29:00.478047    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:00.478051    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:00.480707    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:29:00.979674    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:00.979700    4656 round_trippers.go:469] Request Headers:
	I0816 10:29:00.979712    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:00.979718    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:00.983620    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:29:00.983742    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:29:01.478022    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:01.478042    4656 round_trippers.go:469] Request Headers:
	I0816 10:29:01.478053    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:01.478060    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:01.481326    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:29:01.978405    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:01.978425    4656 round_trippers.go:469] Request Headers:
	I0816 10:29:01.978434    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:01.978438    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:01.981188    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:29:02.479658    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:02.479772    4656 round_trippers.go:469] Request Headers:
	I0816 10:29:02.479790    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:02.479798    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:02.483872    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:29:02.979772    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:02.979794    4656 round_trippers.go:469] Request Headers:
	I0816 10:29:02.979807    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:02.979815    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:02.983496    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:29:03.477789    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:03.477808    4656 round_trippers.go:469] Request Headers:
	I0816 10:29:03.477817    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:03.477821    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:03.480617    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:29:03.480674    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:29:03.977650    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:03.977672    4656 round_trippers.go:469] Request Headers:
	I0816 10:29:03.977683    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:03.977689    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:03.981168    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:29:04.479691    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:04.479717    4656 round_trippers.go:469] Request Headers:
	I0816 10:29:04.479729    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:04.479737    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:04.483384    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:29:04.978063    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:04.978077    4656 round_trippers.go:469] Request Headers:
	I0816 10:29:04.978086    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:04.978091    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:04.980657    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:29:05.479407    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:05.479427    4656 round_trippers.go:469] Request Headers:
	I0816 10:29:05.479438    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:05.479443    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:05.482914    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:29:05.483084    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:29:05.979238    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:05.979260    4656 round_trippers.go:469] Request Headers:
	I0816 10:29:05.979272    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:05.979280    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:05.982997    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:29:06.478226    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:06.478251    4656 round_trippers.go:469] Request Headers:
	I0816 10:29:06.478264    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:06.478270    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:06.482103    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:29:06.482169    4656 node_ready.go:38] duration metric: took 4m0.00480463s for node "ha-286000-m03" to be "Ready" ...
	I0816 10:29:06.503388    4656 out.go:201] 
	W0816 10:29:06.524396    4656 out.go:270] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0816 10:29:06.524419    4656 out.go:270] * 
	W0816 10:29:06.525619    4656 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:29:06.587617    4656 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p ha-286000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-286000
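
The four minutes of 404s in the stderr above are minikube's node_ready wait loop: after the restart, the API server on 192.169.0.5 never saw the third control-plane node register, so every GET /api/v1/nodes/ha-286000-m03 returned Not Found until the node-wait budget ("wait 6m0s for node") ran out with "waitNodeCondition: context deadline exceeded". For reproducing that check by hand, a minimal client-go sketch of the same polling pattern follows; the kubeconfig path and the 500ms retry interval are illustrative assumptions, not values taken from minikube's sources.

// node_ready_sketch.go: poll a Node until its Ready condition is True or a
// deadline passes, mirroring the GET /api/v1/nodes/<name> loop in the log.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		switch {
		case apierrors.IsNotFound(err):
			// 404: the kubelet has not registered the Node object yet --
			// the state the log above is stuck in for ha-286000-m03.
		case err != nil:
			return err
		default:
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // the log shows roughly 500ms between attempts
	}
	return fmt.Errorf("node %q not Ready within %s", name, timeout)
}

func main() {
	// Placeholder kubeconfig path; the failed run used the KUBECONFIG shown in its stdout.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(cs, "ha-286000-m03", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}

A sustained NotFound means the Node object was never created at all, so waiting on the Ready condition could only end in a deadline error; the post-mortem logs below cover why m03 did not come back.
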
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-286000 -n ha-286000
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-286000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-286000 logs -n 25: (3.558886363s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT |                     |
	|         | busybox-7dff88458-99xmp --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT |                     |
	|         | busybox-7dff88458-99xmp --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT |                     |
	|         | busybox-7dff88458-99xmp -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92 -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT |                     |
	|         | busybox-7dff88458-99xmp              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92 -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| node    | add -p ha-286000 -v=7                | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:17 PDT | 16 Aug 24 10:17 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-286000 node stop m02 -v=7         | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:17 PDT | 16 Aug 24 10:18 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-286000 node start m02 -v=7        | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:20 PDT | 16 Aug 24 10:22 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-286000 -v=7               | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:22 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| stop    | -p ha-286000 -v=7                    | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:22 PDT | 16 Aug 24 10:23 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| start   | -p ha-286000 --wait=true -v=7        | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:23 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-286000                    | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:29 PDT |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 10:23:31
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 10:23:31.430615    4656 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:23:31.431053    4656 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:23:31.431060    4656 out.go:358] Setting ErrFile to fd 2...
	I0816 10:23:31.431065    4656 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:23:31.431301    4656 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1276/.minikube/bin
	I0816 10:23:31.432961    4656 out.go:352] Setting JSON to false
	I0816 10:23:31.457337    4656 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3181,"bootTime":1723825830,"procs":437,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0816 10:23:31.457435    4656 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:23:31.479716    4656 out.go:177] * [ha-286000] minikube v1.33.1 on Darwin 14.6.1
	I0816 10:23:31.522521    4656 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:23:31.522577    4656 notify.go:220] Checking for updates...
	I0816 10:23:31.567096    4656 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:23:31.588384    4656 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0816 10:23:31.609442    4656 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:23:31.630204    4656 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:23:31.651227    4656 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:23:31.673167    4656 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:23:31.673335    4656 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:23:31.674026    4656 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:23:31.674118    4656 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:23:31.683709    4656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52161
	I0816 10:23:31.684063    4656 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:23:31.684452    4656 main.go:141] libmachine: Using API Version  1
	I0816 10:23:31.684463    4656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:23:31.684744    4656 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:23:31.684873    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:23:31.714156    4656 out.go:177] * Using the hyperkit driver based on existing profile
	I0816 10:23:31.756393    4656 start.go:297] selected driver: hyperkit
	I0816 10:23:31.756421    4656 start.go:901] validating driver "hyperkit" against &{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:23:31.756672    4656 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:23:31.756879    4656 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:23:31.757097    4656 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19461-1276/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0816 10:23:31.766849    4656 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0816 10:23:31.772699    4656 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:23:31.772722    4656 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0816 10:23:31.776315    4656 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:23:31.776385    4656 cni.go:84] Creating CNI manager for ""
	I0816 10:23:31.776395    4656 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0816 10:23:31.776475    4656 start.go:340] cluster config:
	{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:23:31.776573    4656 iso.go:125] acquiring lock: {Name:mkb430b43b5e7a8a683454482519837ed996c78d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:23:31.798308    4656 out.go:177] * Starting "ha-286000" primary control-plane node in "ha-286000" cluster
	I0816 10:23:31.820262    4656 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:23:31.820333    4656 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0816 10:23:31.820361    4656 cache.go:56] Caching tarball of preloaded images
	I0816 10:23:31.820552    4656 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 10:23:31.820569    4656 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:23:31.820757    4656 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:23:31.821672    4656 start.go:360] acquireMachinesLock for ha-286000: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:23:31.821789    4656 start.go:364] duration metric: took 93.411µs to acquireMachinesLock for "ha-286000"
	I0816 10:23:31.821826    4656 start.go:96] Skipping create...Using existing machine configuration
	I0816 10:23:31.821843    4656 fix.go:54] fixHost starting: 
	I0816 10:23:31.822296    4656 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:23:31.822326    4656 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:23:31.831598    4656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52163
	I0816 10:23:31.831979    4656 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:23:31.832360    4656 main.go:141] libmachine: Using API Version  1
	I0816 10:23:31.832373    4656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:23:31.832622    4656 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:23:31.832766    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:23:31.832876    4656 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:23:31.832983    4656 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:23:31.833087    4656 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:23:31.834009    4656 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid 3771 missing from process table
	I0816 10:23:31.834044    4656 fix.go:112] recreateIfNeeded on ha-286000: state=Stopped err=<nil>
	I0816 10:23:31.834061    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	W0816 10:23:31.834156    4656 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 10:23:31.892140    4656 out.go:177] * Restarting existing hyperkit VM for "ha-286000" ...
	I0816 10:23:31.931475    4656 main.go:141] libmachine: (ha-286000) Calling .Start
	I0816 10:23:31.931796    4656 main.go:141] libmachine: (ha-286000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/hyperkit.pid
	I0816 10:23:31.931814    4656 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:23:31.933360    4656 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid 3771 missing from process table
	I0816 10:23:31.933379    4656 main.go:141] libmachine: (ha-286000) DBG | pid 3771 is in state "Stopped"
	I0816 10:23:31.933400    4656 main.go:141] libmachine: (ha-286000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/hyperkit.pid...
	I0816 10:23:31.934010    4656 main.go:141] libmachine: (ha-286000) DBG | Using UUID ad96de67-e238-408c-89eb-d74e5b68d297
	I0816 10:23:32.043909    4656 main.go:141] libmachine: (ha-286000) DBG | Generated MAC 66:c8:48:4e:12:1b
	I0816 10:23:32.043928    4656 main.go:141] libmachine: (ha-286000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000
	I0816 10:23:32.044052    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ad96de67-e238-408c-89eb-d74e5b68d297", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a89c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:23:32.044084    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ad96de67-e238-408c-89eb-d74e5b68d297", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a89c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:23:32.044134    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "ad96de67-e238-408c-89eb-d74e5b68d297", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/ha-286000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"}
	I0816 10:23:32.044180    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U ad96de67-e238-408c-89eb-d74e5b68d297 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/ha-286000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"
	I0816 10:23:32.044192    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 10:23:32.045646    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 DEBUG: hyperkit: Pid is 4669
	I0816 10:23:32.046030    4656 main.go:141] libmachine: (ha-286000) DBG | Attempt 0
	I0816 10:23:32.046046    4656 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:23:32.046146    4656 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 4669
	I0816 10:23:32.048140    4656 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:23:32.048193    4656 main.go:141] libmachine: (ha-286000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0816 10:23:32.048231    4656 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dbd7}
	I0816 10:23:32.048249    4656 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:23:32.048272    4656 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66c0d7f9}
	I0816 10:23:32.048286    4656 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:23:32.048293    4656 main.go:141] libmachine: (ha-286000) DBG | Found match: 66:c8:48:4e:12:1b
	I0816 10:23:32.048301    4656 main.go:141] libmachine: (ha-286000) DBG | IP: 192.169.0.5
	I0816 10:23:32.048382    4656 main.go:141] libmachine: (ha-286000) Calling .GetConfigRaw
	I0816 10:23:32.049597    4656 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:23:32.049816    4656 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:23:32.050246    4656 machine.go:93] provisionDockerMachine start ...
	I0816 10:23:32.050258    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:23:32.050395    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:32.050512    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:32.050602    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:32.050694    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:32.050788    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:32.050933    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:23:32.051148    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:23:32.051157    4656 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 10:23:32.053822    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 10:23:32.105618    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 10:23:32.106644    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:23:32.106664    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:23:32.106672    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:23:32.106681    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:23:32.488273    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 10:23:32.488286    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 10:23:32.602925    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:23:32.602945    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:23:32.602968    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:23:32.603003    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:23:32.603842    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 10:23:32.603853    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 10:23:38.196809    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:38 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 10:23:38.196887    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:38 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 10:23:38.196898    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:38 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 10:23:38.223115    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:38 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 10:23:43.125906    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 10:23:43.125920    4656 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:23:43.126080    4656 buildroot.go:166] provisioning hostname "ha-286000"
	I0816 10:23:43.126090    4656 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:23:43.126193    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:43.126289    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:43.126427    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:43.126532    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:43.126633    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:43.126763    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:23:43.126897    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:23:43.126905    4656 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-286000 && echo "ha-286000" | sudo tee /etc/hostname
	I0816 10:23:43.200672    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-286000
	
	I0816 10:23:43.200691    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:43.200824    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:43.200934    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:43.201035    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:43.201146    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:43.201266    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:23:43.201423    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:23:43.201434    4656 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-286000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-286000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-286000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 10:23:43.272382    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 10:23:43.272403    4656 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19461-1276/.minikube CaCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19461-1276/.minikube}
	I0816 10:23:43.272418    4656 buildroot.go:174] setting up certificates
	I0816 10:23:43.272432    4656 provision.go:84] configureAuth start
	I0816 10:23:43.272440    4656 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:23:43.272576    4656 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:23:43.272680    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:43.272769    4656 provision.go:143] copyHostCerts
	I0816 10:23:43.272801    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:23:43.272890    4656 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem, removing ...
	I0816 10:23:43.272898    4656 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:23:43.273149    4656 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem (1078 bytes)
	I0816 10:23:43.273406    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:23:43.273447    4656 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem, removing ...
	I0816 10:23:43.273452    4656 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:23:43.273542    4656 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem (1123 bytes)
	I0816 10:23:43.273700    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:23:43.273746    4656 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem, removing ...
	I0816 10:23:43.273751    4656 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:23:43.273833    4656 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem (1679 bytes)
	I0816 10:23:43.274002    4656 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem org=jenkins.ha-286000 san=[127.0.0.1 192.169.0.5 ha-286000 localhost minikube]
	I0816 10:23:43.350973    4656 provision.go:177] copyRemoteCerts
	I0816 10:23:43.351030    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 10:23:43.351047    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:43.351198    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:43.351290    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:43.351418    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:43.351516    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:23:43.390290    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 10:23:43.390367    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 10:23:43.409250    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 10:23:43.409310    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0816 10:23:43.428428    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 10:23:43.428486    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 10:23:43.447295    4656 provision.go:87] duration metric: took 174.931658ms to configureAuth
	I0816 10:23:43.447308    4656 buildroot.go:189] setting minikube options for container-runtime
	I0816 10:23:43.447492    4656 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:23:43.447506    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:23:43.447636    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:43.447734    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:43.447819    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:43.447898    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:43.447976    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:43.448093    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:23:43.448217    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:23:43.448225    4656 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 10:23:43.510056    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 10:23:43.510072    4656 buildroot.go:70] root file system type: tmpfs
	I0816 10:23:43.510138    4656 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 10:23:43.510152    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:43.510280    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:43.510367    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:43.510466    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:43.510546    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:43.510704    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:23:43.510847    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:23:43.510894    4656 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 10:23:43.585463    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 10:23:43.585485    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:43.585612    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:43.585708    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:43.585797    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:43.585881    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:43.585994    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:23:43.586142    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:23:43.586155    4656 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 10:23:45.281245    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
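	The command above implements a write-if-changed idiom: the candidate unit is diffed against the live file, and the replace-reload-restart branch runs only when diff fails, which also covers the first install (the "can't stat" output shows the live file did not exist yet). Stripped to its shape, a minimal sketch with generic paths:

	  NEW=/lib/systemd/system/docker.service.new
	  OLD=/lib/systemd/system/docker.service
	  # diff exits non-zero when the files differ or OLD is missing,
	  # so the replace-and-restart branch runs only when needed
	  sudo diff -u "$OLD" "$NEW" || {
	    sudo mv "$NEW" "$OLD"
	    sudo systemctl -f daemon-reload &&
	      sudo systemctl -f enable docker &&
	      sudo systemctl -f restart docker
	  }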
	
	I0816 10:23:45.281272    4656 machine.go:96] duration metric: took 13.233954511s to provisionDockerMachine
	I0816 10:23:45.281282    4656 start.go:293] postStartSetup for "ha-286000" (driver="hyperkit")
	I0816 10:23:45.281290    4656 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 10:23:45.281301    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:23:45.281477    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 10:23:45.281497    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:45.281579    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:45.281672    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:45.281756    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:45.281830    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:23:45.322349    4656 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 10:23:45.325873    4656 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 10:23:45.325888    4656 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/addons for local assets ...
	I0816 10:23:45.326003    4656 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/files for local assets ...
	I0816 10:23:45.326184    4656 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> 18312.pem in /etc/ssl/certs
	I0816 10:23:45.326190    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /etc/ssl/certs/18312.pem
	I0816 10:23:45.326400    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 10:23:45.335377    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:23:45.364973    4656 start.go:296] duration metric: took 83.714414ms for postStartSetup
	I0816 10:23:45.365002    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:23:45.365179    4656 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0816 10:23:45.365192    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:45.365284    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:45.365363    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:45.365463    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:45.365567    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:23:45.403540    4656 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0816 10:23:45.403604    4656 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0816 10:23:45.456725    4656 fix.go:56] duration metric: took 13.637911557s for fixHost
	I0816 10:23:45.456746    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:45.456881    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:45.456970    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:45.457077    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:45.457170    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:45.457308    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:23:45.457449    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:23:45.457456    4656 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 10:23:45.520497    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723829025.657632114
	
	I0816 10:23:45.520510    4656 fix.go:216] guest clock: 1723829025.657632114
	I0816 10:23:45.520516    4656 fix.go:229] Guest: 2024-08-16 10:23:45.657632114 -0700 PDT Remote: 2024-08-16 10:23:45.456737 -0700 PDT m=+14.070866227 (delta=200.895114ms)
	I0816 10:23:45.520533    4656 fix.go:200] guest clock delta is within tolerance: 200.895114ms
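	The clock check above samples the guest with date +%s.%N over SSH and compares it to the host's wall clock; a delta inside the tolerance (about 200ms here) means no resync is needed. The same measurement standalone, a sketch assuming GNU date on both ends (BSD/macOS date does not support %N) and a hypothetical SSH alias `guest`:

	  guest_ts=$(ssh guest 'date +%s.%N')
	  host_ts=$(date +%s.%N)
	  # absolute skew in seconds; awk avoids a bc dependency
	  awk -v g="$guest_ts" -v h="$host_ts" \
	    'BEGIN { d = g - h; if (d < 0) d = -d; printf "delta=%.3fs\n", d }'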
	I0816 10:23:45.520536    4656 start.go:83] releasing machines lock for "ha-286000", held for 13.701786252s
	I0816 10:23:45.520558    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:23:45.520685    4656 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:23:45.520780    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:23:45.521071    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:23:45.521183    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:23:45.521258    4656 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 10:23:45.521295    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:45.521314    4656 ssh_runner.go:195] Run: cat /version.json
	I0816 10:23:45.521325    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:45.521385    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:45.521413    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:45.521478    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:45.521492    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:45.521569    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:45.521588    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:45.521684    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:23:45.521698    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:23:45.608738    4656 ssh_runner.go:195] Run: systemctl --version
	I0816 10:23:45.613819    4656 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 10:23:45.618009    4656 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 10:23:45.618054    4656 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 10:23:45.630928    4656 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 10:23:45.630940    4656 start.go:495] detecting cgroup driver to use...
	I0816 10:23:45.631050    4656 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:23:45.647297    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 10:23:45.656185    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 10:23:45.664870    4656 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 10:23:45.664909    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 10:23:45.673735    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:23:45.682541    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 10:23:45.691093    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:23:45.699692    4656 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 10:23:45.708389    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 10:23:45.717214    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 10:23:45.726031    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 10:23:45.734772    4656 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 10:23:45.742525    4656 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 10:23:45.750474    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:23:45.857037    4656 ssh_runner.go:195] Run: sudo systemctl restart containerd
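	The run of sed edits above rewrites /etc/containerd/config.toml in place so containerd uses the cgroupfs driver, the runc v2 runtime, and the expected CNI conf dir, and the restart picks the changes up. The key toggle shown standalone, a sketch assuming the stock config layout with a SystemdCgroup key:

	  # force the runc runtime to cgroupfs rather than the systemd cgroup driver
	  sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' \
	    /etc/containerd/config.toml
	  sudo systemctl daemon-reload && sudo systemctl restart containerd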
	I0816 10:23:45.876038    4656 start.go:495] detecting cgroup driver to use...
	I0816 10:23:45.876115    4656 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 10:23:45.891371    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:23:45.904769    4656 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 10:23:45.925222    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:23:45.935653    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:23:45.946111    4656 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 10:23:45.966114    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:23:45.976753    4656 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:23:45.991951    4656 ssh_runner.go:195] Run: which cri-dockerd
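	The /etc/crictl.yaml just rewritten is how crictl learns which CRI socket to talk to; it was first pointed at containerd and is now repointed at cri-dockerd since Docker is the chosen runtime. A sketch of the write plus a sanity check, assuming crictl is on the PATH:

	  printf 'runtime-endpoint: unix:///var/run/cri-dockerd.sock\n' \
	    | sudo tee /etc/crictl.yaml
	  # should report docker as the runtime once the socket is live
	  sudo crictl version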
	I0816 10:23:45.995087    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 10:23:46.002262    4656 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 10:23:46.015662    4656 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 10:23:46.113010    4656 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 10:23:46.220102    4656 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 10:23:46.220181    4656 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 10:23:46.234448    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:23:46.327392    4656 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:23:48.670555    4656 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.343962753s)
	I0816 10:23:48.670612    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0816 10:23:48.681270    4656 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0816 10:23:48.694180    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:23:48.704525    4656 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0816 10:23:48.796386    4656 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0816 10:23:48.896301    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:23:49.015732    4656 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0816 10:23:49.029308    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:23:49.039437    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:23:49.133284    4656 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0816 10:23:49.196413    4656 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0816 10:23:49.196492    4656 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
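	cri-dockerd is socket-activated, so the sequence above unmasks and enables cri-docker.socket, reloads systemd, restarts socket then service, and finally waits for /var/run/cri-dockerd.sock to appear. Condensed into a standalone sketch:

	  sudo systemctl unmask cri-docker.socket
	  sudo systemctl enable cri-docker.socket
	  sudo systemctl daemon-reload
	  sudo systemctl restart cri-docker.socket cri-docker.service
	  # the kubelet talks CRI over this socket; it must exist before kubeadm runs
	  stat /var/run/cri-dockerd.sock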
	I0816 10:23:49.200987    4656 start.go:563] Will wait 60s for crictl version
	I0816 10:23:49.201034    4656 ssh_runner.go:195] Run: which crictl
	I0816 10:23:49.204272    4656 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 10:23:49.229772    4656 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0816 10:23:49.229851    4656 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:23:49.247799    4656 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:23:49.310834    4656 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0816 10:23:49.310884    4656 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:23:49.311324    4656 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0816 10:23:49.315940    4656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
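	The /etc/hosts update above is a filter-then-append pattern: grep -v drops any stale line for the name, the fresh mapping is echoed onto the end, and the result is installed with sudo cp. The unprivileged part only reads, sudo is confined to the final copy, and cp (rather than mv) keeps the original inode, which matters when /etc/hosts is bind-mounted. As a reusable sketch with hypothetical NAME/IP values:

	  NAME=host.minikube.internal
	  IP=192.169.0.1
	  { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } \
	    > /tmp/hosts.$$
	  sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$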
	I0816 10:23:49.325830    4656 kubeadm.go:883] updating cluster {Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 10:23:49.325921    4656 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:23:49.325979    4656 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 10:23:49.344604    4656 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0816 10:23:49.344616    4656 docker.go:615] Images already preloaded, skipping extraction
	I0816 10:23:49.344689    4656 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 10:23:49.358019    4656 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0816 10:23:49.358039    4656 cache_images.go:84] Images are preloaded, skipping loading
	I0816 10:23:49.358049    4656 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.0 docker true true} ...
	I0816 10:23:49.358133    4656 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-286000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 10:23:49.358200    4656 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
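	docker info is queried here because the daemon's cgroup driver must agree with the cgroupDriver the kubelet will be given (cgroupfs, as seen in the KubeletConfiguration dumped just below); a mismatch makes the kubelet fail to create pods. A quick sketch of the check:

	  driver=$(docker info --format '{{.CgroupDriver}}')
	  # expected to print "cgroupfs" to match the kubelet config
	  echo "docker cgroup driver: $driver"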
	I0816 10:23:49.396733    4656 cni.go:84] Creating CNI manager for ""
	I0816 10:23:49.396746    4656 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0816 10:23:49.396758    4656 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 10:23:49.396773    4656 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-286000 NodeName:ha-286000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 10:23:49.396858    4656 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-286000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
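	The generated file bundles four kubeadm API documents in one YAML stream: InitConfiguration (node and endpoint), ClusterConfiguration (control-plane components, etcd, networking), KubeletConfiguration, and KubeProxyConfiguration. On recent kubeadm releases the bundle can be linted before use; a sketch, assuming the dump above is saved as kubeadm.yaml:

	  # structural/semantic validation of every document in the stream
	  kubeadm config validate --config kubeadm.yaml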
	
	I0816 10:23:49.396876    4656 kube-vip.go:115] generating kube-vip config ...
	I0816 10:23:49.396930    4656 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 10:23:49.409760    4656 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 10:23:49.409827    4656 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
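	The static pod above runs kube-vip in ARP mode with leader election (vip_leaderelection, lease plndr-cp-lock), so exactly one control-plane node answers for the VIP 192.169.0.254, while lb_enable/lb_port spread API traffic on 8443 across the members. Once a leader holds the VIP, reachability can be probed from any node; a sketch assuming openssl is available:

	  # a TLS handshake against the VIP proves the apiserver LB is answering
	  openssl s_client -connect 192.169.0.254:8443 </dev/null 2>/dev/null \
	    | openssl x509 -noout -subject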
	I0816 10:23:49.409880    4656 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 10:23:49.417741    4656 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 10:23:49.417784    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0816 10:23:49.425178    4656 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0816 10:23:49.438709    4656 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 10:23:49.451834    4656 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0816 10:23:49.465615    4656 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0816 10:23:49.478992    4656 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0816 10:23:49.481872    4656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:23:49.491581    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:23:49.591270    4656 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:23:49.605166    4656 certs.go:68] Setting up /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000 for IP: 192.169.0.5
	I0816 10:23:49.605178    4656 certs.go:194] generating shared ca certs ...
	I0816 10:23:49.605204    4656 certs.go:226] acquiring lock for ca certs: {Name:mka549fbc467849834d2267da204028c49d88a3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:23:49.605373    4656 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key
	I0816 10:23:49.605447    4656 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key
	I0816 10:23:49.605458    4656 certs.go:256] generating profile certs ...
	I0816 10:23:49.605548    4656 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key
	I0816 10:23:49.605569    4656 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.1abcce66
	I0816 10:23:49.605590    4656 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.1abcce66 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I0816 10:23:49.872724    4656 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.1abcce66 ...
	I0816 10:23:49.872746    4656 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.1abcce66: {Name:mk52a3c288948ed76c5e0c3d52d6b4bf6d85dac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:23:49.873234    4656 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.1abcce66 ...
	I0816 10:23:49.873246    4656 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.1abcce66: {Name:mk4d6d8f8e53e86a8e5b1aff2a47e28c9af375aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:23:49.873462    4656 certs.go:381] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.1abcce66 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt
	I0816 10:23:49.873670    4656 certs.go:385] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.1abcce66 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key
	I0816 10:23:49.873917    4656 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key
	I0816 10:23:49.873927    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 10:23:49.873950    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 10:23:49.873969    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 10:23:49.873988    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 10:23:49.874005    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 10:23:49.874022    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 10:23:49.874039    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 10:23:49.874056    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 10:23:49.874155    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem (1338 bytes)
	W0816 10:23:49.874204    4656 certs.go:480] ignoring /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831_empty.pem, impossibly tiny 0 bytes
	I0816 10:23:49.874213    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 10:23:49.874243    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem (1078 bytes)
	I0816 10:23:49.874272    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem (1123 bytes)
	I0816 10:23:49.874303    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem (1679 bytes)
	I0816 10:23:49.874365    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:23:49.874404    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem -> /usr/share/ca-certificates/1831.pem
	I0816 10:23:49.874426    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /usr/share/ca-certificates/18312.pem
	I0816 10:23:49.874445    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:23:49.874951    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 10:23:49.894591    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 10:23:49.949362    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 10:23:50.001129    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 10:23:50.031447    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0816 10:23:50.051861    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 10:23:50.072126    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 10:23:50.092020    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 10:23:50.111735    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem --> /usr/share/ca-certificates/1831.pem (1338 bytes)
	I0816 10:23:50.131448    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /usr/share/ca-certificates/18312.pem (1708 bytes)
	I0816 10:23:50.150204    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 10:23:50.170431    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 10:23:50.183792    4656 ssh_runner.go:195] Run: openssl version
	I0816 10:23:50.188069    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18312.pem && ln -fs /usr/share/ca-certificates/18312.pem /etc/ssl/certs/18312.pem"
	I0816 10:23:50.196462    4656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18312.pem
	I0816 10:23:50.199930    4656 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 16:57 /usr/share/ca-certificates/18312.pem
	I0816 10:23:50.199966    4656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18312.pem
	I0816 10:23:50.204340    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18312.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 10:23:50.212595    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 10:23:50.220934    4656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:23:50.224472    4656 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:48 /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:23:50.224507    4656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:23:50.228762    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 10:23:50.237224    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1831.pem && ln -fs /usr/share/ca-certificates/1831.pem /etc/ssl/certs/1831.pem"
	I0816 10:23:50.245558    4656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1831.pem
	I0816 10:23:50.249052    4656 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 16:57 /usr/share/ca-certificates/1831.pem
	I0816 10:23:50.249090    4656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1831.pem
	I0816 10:23:50.253505    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1831.pem /etc/ssl/certs/51391683.0"
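	The openssl x509 -hash calls explain the odd symlink names: OpenSSL looks CAs up in /etc/ssl/certs by the hash of their subject, so each certificate gets a <hash>.0 link (b5213941.0 above is the subject hash of the minikubeCA certificate). The pattern as a standalone sketch:

	  crt=/usr/share/ca-certificates/minikubeCA.pem
	  h=$(openssl x509 -hash -noout -in "$crt")
	  # c_rehash-style link: OpenSSL resolves trust anchors via <subject-hash>.0
	  sudo ln -fs "$crt" "/etc/ssl/certs/$h.0"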
	I0816 10:23:50.261784    4656 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 10:23:50.265339    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 10:23:50.269761    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 10:23:50.273967    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 10:23:50.278404    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 10:23:50.282734    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 10:23:50.286959    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
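	Each -checkend 86400 run asks whether the certificate will still be valid 86400 seconds (24 hours) from now; openssl exits non-zero if it expires inside that window, which is what would trigger regeneration. The check wrapped in a shell conditional, as a sketch:

	  if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt \
	      -checkend 86400; then
	    echo "cert valid for at least 24h"
	  else
	    echo "cert missing or expiring soon" >&2
	  fi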
	I0816 10:23:50.291328    4656 kubeadm.go:392] StartCluster: {Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:23:50.291439    4656 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0816 10:23:50.308917    4656 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 10:23:50.316477    4656 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 10:23:50.316487    4656 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 10:23:50.316521    4656 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 10:23:50.324768    4656 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:23:50.325077    4656 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-286000" does not appear in /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:23:50.325160    4656 kubeconfig.go:62] /Users/jenkins/minikube-integration/19461-1276/kubeconfig needs updating (will repair): [kubeconfig missing "ha-286000" cluster setting kubeconfig missing "ha-286000" context setting]
	I0816 10:23:50.325346    4656 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/kubeconfig: {Name:mk7f4491c8ec5cf96c96a5a1a5dbfba4301394a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:23:50.325844    4656 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:23:50.326042    4656 kapi.go:59] client config for ha-286000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key", CAFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x5540f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 10:23:50.326340    4656 cert_rotation.go:140] Starting client certificate rotation controller
	I0816 10:23:50.326539    4656 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 10:23:50.333744    4656 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0816 10:23:50.333758    4656 kubeadm.go:597] duration metric: took 17.27164ms to restartPrimaryControlPlane
	I0816 10:23:50.333763    4656 kubeadm.go:394] duration metric: took 42.452811ms to StartCluster
	I0816 10:23:50.333775    4656 settings.go:142] acquiring lock: {Name:mk60e377244eac756871b74b6bd45115654652a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:23:50.333847    4656 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:23:50.334196    4656 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/kubeconfig: {Name:mk7f4491c8ec5cf96c96a5a1a5dbfba4301394a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:23:50.334417    4656 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:23:50.334430    4656 start.go:241] waiting for startup goroutines ...
	I0816 10:23:50.334436    4656 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 10:23:50.334546    4656 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:23:50.378007    4656 out.go:177] * Enabled addons: 
	I0816 10:23:50.399051    4656 addons.go:510] duration metric: took 64.628768ms for enable addons: enabled=[]
	I0816 10:23:50.399122    4656 start.go:246] waiting for cluster config update ...
	I0816 10:23:50.399134    4656 start.go:255] writing updated cluster config ...
	I0816 10:23:50.421150    4656 out.go:201] 
	I0816 10:23:50.443594    4656 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:23:50.443722    4656 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:23:50.466091    4656 out.go:177] * Starting "ha-286000-m02" control-plane node in "ha-286000" cluster
	I0816 10:23:50.507896    4656 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:23:50.507978    4656 cache.go:56] Caching tarball of preloaded images
	I0816 10:23:50.508166    4656 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 10:23:50.508183    4656 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:23:50.508305    4656 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:23:50.509238    4656 start.go:360] acquireMachinesLock for ha-286000-m02: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:23:50.509340    4656 start.go:364] duration metric: took 77.349µs to acquireMachinesLock for "ha-286000-m02"
	I0816 10:23:50.509364    4656 start.go:96] Skipping create...Using existing machine configuration
	I0816 10:23:50.509373    4656 fix.go:54] fixHost starting: m02
	I0816 10:23:50.509785    4656 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:23:50.509813    4656 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:23:50.519278    4656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52185
	I0816 10:23:50.519808    4656 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:23:50.520224    4656 main.go:141] libmachine: Using API Version  1
	I0816 10:23:50.520241    4656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:23:50.520527    4656 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:23:50.520742    4656 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:23:50.520847    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetState
	I0816 10:23:50.520930    4656 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:23:50.521027    4656 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 4408
	I0816 10:23:50.521973    4656 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid 4408 missing from process table
	I0816 10:23:50.522001    4656 fix.go:112] recreateIfNeeded on ha-286000-m02: state=Stopped err=<nil>
	I0816 10:23:50.522008    4656 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	W0816 10:23:50.522113    4656 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 10:23:50.564905    4656 out.go:177] * Restarting existing hyperkit VM for "ha-286000-m02" ...
	I0816 10:23:50.585936    4656 main.go:141] libmachine: (ha-286000-m02) Calling .Start
	I0816 10:23:50.586207    4656 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:23:50.586317    4656 main.go:141] libmachine: (ha-286000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/hyperkit.pid
	I0816 10:23:50.588008    4656 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid 4408 missing from process table
	I0816 10:23:50.588025    4656 main.go:141] libmachine: (ha-286000-m02) DBG | pid 4408 is in state "Stopped"
	I0816 10:23:50.588043    4656 main.go:141] libmachine: (ha-286000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/hyperkit.pid...
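	The DBG lines show the driver's stale-pid handling: hyperkit.pid survived an unclean shutdown, but pid 4408 is gone from the process table, so the file is removed before a fresh start. The equivalent check in shell, a sketch assuming a conventional pidfile location:

	  pidfile="$HOME/.minikube/machines/ha-286000-m02/hyperkit.pid"
	  # kill -0 probes for process existence without sending a signal
	  if [ -f "$pidfile" ] && ! kill -0 "$(cat "$pidfile")" 2>/dev/null; then
	    rm -f "$pidfile"   # pid no longer running: the file is stale
	  fi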
	I0816 10:23:50.588412    4656 main.go:141] libmachine: (ha-286000-m02) DBG | Using UUID f7630301-2bc3-4935-a751-ac72999da031
	I0816 10:23:50.615912    4656 main.go:141] libmachine: (ha-286000-m02) DBG | Generated MAC 72:69:8f:11:68:1d
	I0816 10:23:50.615934    4656 main.go:141] libmachine: (ha-286000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000
	I0816 10:23:50.616061    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f7630301-2bc3-4935-a751-ac72999da031", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aca20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:23:50.616091    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f7630301-2bc3-4935-a751-ac72999da031", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aca20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:23:50.616153    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "f7630301-2bc3-4935-a751-ac72999da031", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/ha-286000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"}
	I0816 10:23:50.616186    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U f7630301-2bc3-4935-a751-ac72999da031 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/ha-286000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"
	I0816 10:23:50.616197    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 10:23:50.617617    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 DEBUG: hyperkit: Pid is 4678
	I0816 10:23:50.618129    4656 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 0
	I0816 10:23:50.618145    4656 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:23:50.618226    4656 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 4678
	I0816 10:23:50.620253    4656 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:23:50.620318    4656 main.go:141] libmachine: (ha-286000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0816 10:23:50.620334    4656 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:23:50.620349    4656 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dbd7}
	I0816 10:23:50.620388    4656 main.go:141] libmachine: (ha-286000-m02) DBG | Found match: 72:69:8f:11:68:1d
	I0816 10:23:50.620402    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetConfigRaw
	I0816 10:23:50.620404    4656 main.go:141] libmachine: (ha-286000-m02) DBG | IP: 192.169.0.6
	I0816 10:23:50.621061    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:23:50.621271    4656 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:23:50.621639    4656 machine.go:93] provisionDockerMachine start ...
	I0816 10:23:50.621648    4656 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:23:50.621787    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:23:50.621898    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:23:50.622018    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:23:50.622130    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:23:50.622215    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:23:50.622373    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:23:50.622508    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:23:50.622515    4656 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 10:23:50.625610    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 10:23:50.635240    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 10:23:50.636222    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:23:50.636239    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:23:50.636256    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:23:50.636268    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:23:51.016978    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:51 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 10:23:51.016996    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:51 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 10:23:51.131867    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:23:51.131882    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:23:51.131905    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:23:51.131915    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:23:51.132722    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:51 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 10:23:51.132732    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:51 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 10:23:56.691144    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:56 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 10:23:56.691211    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:56 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 10:23:56.691221    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:56 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 10:23:56.715157    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:56 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 10:24:01.691628    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 10:24:01.691659    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:24:01.691824    4656 buildroot.go:166] provisioning hostname "ha-286000-m02"
	I0816 10:24:01.691835    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:24:01.691933    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:24:01.692024    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:24:01.692118    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:01.692216    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:01.692322    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:24:01.692468    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:01.692634    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:24:01.692662    4656 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-286000-m02 && echo "ha-286000-m02" | sudo tee /etc/hostname
	I0816 10:24:01.771215    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-286000-m02
	
	I0816 10:24:01.771228    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:24:01.771358    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:24:01.771450    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:01.771545    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:01.771647    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:24:01.771778    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:01.771942    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:24:01.771954    4656 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-286000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-286000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-286000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 10:24:01.843105    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 10:24:01.843122    4656 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19461-1276/.minikube CaCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19461-1276/.minikube}
	I0816 10:24:01.843132    4656 buildroot.go:174] setting up certificates
	I0816 10:24:01.843138    4656 provision.go:84] configureAuth start
	I0816 10:24:01.843144    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:24:01.843278    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:24:01.843379    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:24:01.843473    4656 provision.go:143] copyHostCerts
	I0816 10:24:01.843506    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:24:01.843559    4656 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem, removing ...
	I0816 10:24:01.843565    4656 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:24:01.843699    4656 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem (1078 bytes)
	I0816 10:24:01.843904    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:24:01.843934    4656 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem, removing ...
	I0816 10:24:01.843938    4656 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:24:01.844006    4656 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem (1123 bytes)
	I0816 10:24:01.844155    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:24:01.844183    4656 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem, removing ...
	I0816 10:24:01.844188    4656 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:24:01.844260    4656 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem (1679 bytes)
	I0816 10:24:01.844439    4656 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem org=jenkins.ha-286000-m02 san=[127.0.0.1 192.169.0.6 ha-286000-m02 localhost minikube]
	I0816 10:24:02.337393    4656 provision.go:177] copyRemoteCerts
	I0816 10:24:02.337441    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 10:24:02.337455    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:24:02.337604    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:24:02.337706    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:02.337804    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:24:02.337897    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:24:02.378639    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 10:24:02.378714    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 10:24:02.398417    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 10:24:02.398480    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 10:24:02.418213    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 10:24:02.418277    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 10:24:02.438096    4656 provision.go:87] duration metric: took 595.044673ms to configureAuth
	I0816 10:24:02.438110    4656 buildroot.go:189] setting minikube options for container-runtime
	I0816 10:24:02.438277    4656 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:24:02.438294    4656 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:24:02.438430    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:24:02.438542    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:24:02.438634    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:02.438711    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:02.438803    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:24:02.438923    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:02.439049    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:24:02.439057    4656 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 10:24:02.506619    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 10:24:02.506630    4656 buildroot.go:70] root file system type: tmpfs
	I0816 10:24:02.506699    4656 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 10:24:02.506717    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:24:02.506855    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:24:02.506952    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:02.507065    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:02.507163    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:24:02.507316    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:02.507497    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:24:02.507542    4656 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 10:24:02.585569    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 10:24:02.585592    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:24:02.585731    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:24:02.585811    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:02.585904    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:02.585995    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:24:02.586114    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:02.586256    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:24:02.586268    4656 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 10:24:04.282251    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0816 10:24:04.282267    4656 machine.go:96] duration metric: took 13.663433605s to provisionDockerMachine
	I0816 10:24:04.282274    4656 start.go:293] postStartSetup for "ha-286000-m02" (driver="hyperkit")
	I0816 10:24:04.282282    4656 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 10:24:04.282291    4656 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:24:04.282476    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 10:24:04.282490    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:24:04.282590    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:24:04.282676    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:04.282759    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:24:04.282862    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:24:04.323177    4656 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 10:24:04.326227    4656 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 10:24:04.326238    4656 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/addons for local assets ...
	I0816 10:24:04.326327    4656 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/files for local assets ...
	I0816 10:24:04.326475    4656 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> 18312.pem in /etc/ssl/certs
	I0816 10:24:04.326481    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /etc/ssl/certs/18312.pem
	I0816 10:24:04.326635    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 10:24:04.333923    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:24:04.354007    4656 start.go:296] duration metric: took 71.735624ms for postStartSetup
	I0816 10:24:04.354029    4656 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:24:04.354205    4656 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0816 10:24:04.354219    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:24:04.354303    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:24:04.354400    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:04.354484    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:24:04.354570    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:24:04.394664    4656 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0816 10:24:04.394719    4656 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0816 10:24:04.426272    4656 fix.go:56] duration metric: took 13.919762029s for fixHost
	I0816 10:24:04.426298    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:24:04.426444    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:24:04.426552    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:04.426653    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:04.426754    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:24:04.426882    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:04.427028    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:24:04.427036    4656 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 10:24:04.493811    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723829044.518955224
	
	I0816 10:24:04.493822    4656 fix.go:216] guest clock: 1723829044.518955224
	I0816 10:24:04.493832    4656 fix.go:229] Guest: 2024-08-16 10:24:04.518955224 -0700 PDT Remote: 2024-08-16 10:24:04.426286 -0700 PDT m=+33.045019463 (delta=92.669224ms)
	I0816 10:24:04.493843    4656 fix.go:200] guest clock delta is within tolerance: 92.669224ms
	I0816 10:24:04.493847    4656 start.go:83] releasing machines lock for "ha-286000-m02", held for 13.987372778s
	I0816 10:24:04.493864    4656 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:24:04.494002    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:24:04.518312    4656 out.go:177] * Found network options:
	I0816 10:24:04.540563    4656 out.go:177]   - NO_PROXY=192.169.0.5
	W0816 10:24:04.562476    4656 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:24:04.562514    4656 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:24:04.563369    4656 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:24:04.563631    4656 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:24:04.563760    4656 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 10:24:04.563821    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	W0816 10:24:04.563878    4656 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:24:04.563978    4656 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0816 10:24:04.563994    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:24:04.563998    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:24:04.564194    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:04.564230    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:24:04.564370    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:24:04.564412    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:04.564603    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:24:04.564677    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:24:04.564735    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	W0816 10:24:04.601353    4656 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 10:24:04.601410    4656 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 10:24:04.653940    4656 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 10:24:04.653960    4656 start.go:495] detecting cgroup driver to use...
	I0816 10:24:04.654084    4656 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:24:04.669702    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 10:24:04.678676    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 10:24:04.687652    4656 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 10:24:04.687695    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 10:24:04.696611    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:24:04.705567    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 10:24:04.714412    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:24:04.723256    4656 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 10:24:04.732202    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 10:24:04.746674    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 10:24:04.757904    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 10:24:04.767905    4656 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 10:24:04.779013    4656 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 10:24:04.790474    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:24:04.892919    4656 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0816 10:24:04.911874    4656 start.go:495] detecting cgroup driver to use...
	I0816 10:24:04.911946    4656 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 10:24:04.929416    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:24:04.941191    4656 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 10:24:04.954835    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:24:04.965605    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:24:04.976040    4656 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 10:24:05.001090    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:24:05.011999    4656 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:24:05.026893    4656 ssh_runner.go:195] Run: which cri-dockerd
	I0816 10:24:05.029920    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 10:24:05.037094    4656 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 10:24:05.050742    4656 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 10:24:05.142175    4656 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 10:24:05.247816    4656 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 10:24:05.247843    4656 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 10:24:05.261875    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:24:05.354182    4656 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:24:07.691138    4656 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.337231062s)
	I0816 10:24:07.691198    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0816 10:24:07.701875    4656 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0816 10:24:07.715113    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:24:07.725351    4656 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0816 10:24:07.820462    4656 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0816 10:24:07.932462    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:24:08.044265    4656 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0816 10:24:08.057914    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:24:08.069171    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:24:08.165855    4656 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0816 10:24:08.229743    4656 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0816 10:24:08.229822    4656 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0816 10:24:08.234625    4656 start.go:563] Will wait 60s for crictl version
	I0816 10:24:08.234677    4656 ssh_runner.go:195] Run: which crictl
	I0816 10:24:08.237852    4656 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 10:24:08.262491    4656 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0816 10:24:08.262569    4656 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:24:08.282005    4656 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:24:08.324107    4656 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0816 10:24:08.365750    4656 out.go:177]   - env NO_PROXY=192.169.0.5
	I0816 10:24:08.386602    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:24:08.387035    4656 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0816 10:24:08.391617    4656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:24:08.401981    4656 mustload.go:65] Loading cluster: ha-286000
	I0816 10:24:08.402159    4656 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:24:08.402381    4656 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:24:08.402414    4656 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:24:08.411266    4656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52207
	I0816 10:24:08.411600    4656 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:24:08.411912    4656 main.go:141] libmachine: Using API Version  1
	I0816 10:24:08.411923    4656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:24:08.412158    4656 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:24:08.412273    4656 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:24:08.412350    4656 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:24:08.412439    4656 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 4669
	I0816 10:24:08.413371    4656 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:24:08.413648    4656 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:24:08.413671    4656 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:24:08.422352    4656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52209
	I0816 10:24:08.422710    4656 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:24:08.423035    4656 main.go:141] libmachine: Using API Version  1
	I0816 10:24:08.423046    4656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:24:08.423253    4656 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:24:08.423365    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:24:08.423454    4656 certs.go:68] Setting up /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000 for IP: 192.169.0.6
	I0816 10:24:08.423460    4656 certs.go:194] generating shared ca certs ...
	I0816 10:24:08.423469    4656 certs.go:226] acquiring lock for ca certs: {Name:mka549fbc467849834d2267da204028c49d88a3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:24:08.423616    4656 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key
	I0816 10:24:08.423685    4656 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key
	I0816 10:24:08.423693    4656 certs.go:256] generating profile certs ...
	I0816 10:24:08.423785    4656 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key
	I0816 10:24:08.423872    4656 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.df014ba6
	I0816 10:24:08.423924    4656 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key
	I0816 10:24:08.423931    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 10:24:08.423952    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 10:24:08.423978    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 10:24:08.423996    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 10:24:08.424013    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 10:24:08.424031    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 10:24:08.424049    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 10:24:08.424065    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 10:24:08.424139    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem (1338 bytes)
	W0816 10:24:08.424181    4656 certs.go:480] ignoring /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831_empty.pem, impossibly tiny 0 bytes
	I0816 10:24:08.424189    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 10:24:08.424243    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem (1078 bytes)
	I0816 10:24:08.424278    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem (1123 bytes)
	I0816 10:24:08.424308    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem (1679 bytes)
	I0816 10:24:08.424377    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:24:08.424414    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem -> /usr/share/ca-certificates/1831.pem
	I0816 10:24:08.424439    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /usr/share/ca-certificates/18312.pem
	I0816 10:24:08.424464    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:24:08.424490    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:24:08.424585    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:24:08.424670    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:24:08.424754    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:24:08.424829    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:24:08.455631    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0816 10:24:08.459165    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0816 10:24:08.467170    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0816 10:24:08.470222    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0816 10:24:08.478239    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0816 10:24:08.481358    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0816 10:24:08.489236    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0816 10:24:08.492402    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0816 10:24:08.500317    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0816 10:24:08.503508    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0816 10:24:08.511673    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0816 10:24:08.514769    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0816 10:24:08.522766    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 10:24:08.542887    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 10:24:08.562071    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 10:24:08.581743    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 10:24:08.600945    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0816 10:24:08.620933    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 10:24:08.640254    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 10:24:08.659444    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 10:24:08.678715    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem --> /usr/share/ca-certificates/1831.pem (1338 bytes)
	I0816 10:24:08.697527    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /usr/share/ca-certificates/18312.pem (1708 bytes)
	I0816 10:24:08.716988    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 10:24:08.735913    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0816 10:24:08.749507    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0816 10:24:08.763125    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0816 10:24:08.776902    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0816 10:24:08.790611    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0816 10:24:08.804538    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0816 10:24:08.817970    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0816 10:24:08.831472    4656 ssh_runner.go:195] Run: openssl version
	I0816 10:24:08.835773    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18312.pem && ln -fs /usr/share/ca-certificates/18312.pem /etc/ssl/certs/18312.pem"
	I0816 10:24:08.845139    4656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18312.pem
	I0816 10:24:08.848508    4656 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 16:57 /usr/share/ca-certificates/18312.pem
	I0816 10:24:08.848545    4656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18312.pem
	I0816 10:24:08.852837    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18312.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 10:24:08.861881    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 10:24:08.870959    4656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:24:08.874362    4656 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:48 /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:24:08.874393    4656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:24:08.878676    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 10:24:08.887721    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1831.pem && ln -fs /usr/share/ca-certificates/1831.pem /etc/ssl/certs/1831.pem"
	I0816 10:24:08.896767    4656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1831.pem
	I0816 10:24:08.900184    4656 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 16:57 /usr/share/ca-certificates/1831.pem
	I0816 10:24:08.900218    4656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1831.pem
	I0816 10:24:08.904590    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1831.pem /etc/ssl/certs/51391683.0"
	I0816 10:24:08.913817    4656 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 10:24:08.917320    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 10:24:08.921592    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 10:24:08.925840    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 10:24:08.930232    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 10:24:08.934401    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 10:24:08.938749    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0816 10:24:08.943061    4656 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.0 docker true true} ...
	I0816 10:24:08.943117    4656 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-286000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 10:24:08.943138    4656 kube-vip.go:115] generating kube-vip config ...
	I0816 10:24:08.943173    4656 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 10:24:08.956099    4656 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 10:24:08.956137    4656 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
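
	The YAML above is the kube-vip static-pod manifest that is written to /etc/kubernetes/manifests a few lines further down, where the kubelet picks it up directly without involving the scheduler. A short sketch that parses such a manifest and pulls out the container image, assuming gopkg.in/yaml.v3 as the YAML library (an assumption for this sketch; minikube renders the manifest from a Go template instead):

	    package main

	    import (
	        "fmt"

	        "gopkg.in/yaml.v3" // assumed library choice for this sketch
	    )

	    func main() {
	        manifest := `
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: kube-vip
	      namespace: kube-system
	    spec:
	      containers:
	      - name: kube-vip
	        image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    `
	        // Decode only the fields this check cares about.
	        var pod struct {
	            Spec struct {
	                Containers []struct {
	                    Name  string `yaml:"name"`
	                    Image string `yaml:"image"`
	                } `yaml:"containers"`
	            } `yaml:"spec"`
	        }
	        if err := yaml.Unmarshal([]byte(manifest), &pod); err != nil {
	            panic(err)
	        }
	        fmt.Println(pod.Spec.Containers[0].Image) // ghcr.io/kube-vip/kube-vip:v0.8.0
	    }
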
	I0816 10:24:08.956187    4656 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 10:24:08.964732    4656 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 10:24:08.964780    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0816 10:24:08.972962    4656 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0816 10:24:08.986351    4656 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 10:24:08.999555    4656 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0816 10:24:09.013514    4656 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0816 10:24:09.016494    4656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
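
	The bash one-liner above is an idempotent upsert: it filters any existing control-plane.minikube.internal entry out of /etc/hosts and appends the VIP mapping, so repeated runs converge on a single line. A hypothetical Go equivalent that operates on the file contents as a string:

	    package main

	    import (
	        "fmt"
	        "strings"
	    )

	    // upsertHost drops any stale "<ip>\t<name>" line and appends the new one.
	    func upsertHost(hosts, ip, name string) string {
	        var kept []string
	        for _, line := range strings.Split(hosts, "\n") {
	            if strings.HasSuffix(line, "\t"+name) {
	                continue // remove the old entry for this name
	            }
	            kept = append(kept, line)
	        }
	        kept = append(kept, ip+"\t"+name)
	        return strings.Join(kept, "\n")
	    }

	    func main() {
	        before := "127.0.0.1\tlocalhost\n192.169.0.5\tcontrol-plane.minikube.internal"
	        fmt.Println(upsertHost(before, "192.169.0.254", "control-plane.minikube.internal"))
	    }
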
	I0816 10:24:09.026607    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:24:09.119324    4656 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:24:09.134140    4656 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:24:09.134339    4656 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:24:09.155614    4656 out.go:177] * Verifying Kubernetes components...
	I0816 10:24:09.197468    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:24:09.303306    4656 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:24:09.318292    4656 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:24:09.318481    4656 kapi.go:59] client config for ha-286000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key", CAFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x5540f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0816 10:24:09.318519    4656 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0816 10:24:09.318689    4656 node_ready.go:35] waiting up to 6m0s for node "ha-286000-m02" to be "Ready" ...
	I0816 10:24:09.318767    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:24:09.318772    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:09.318780    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:09.318783    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.478519    4656 round_trippers.go:574] Response Status: 200 OK in 9160 milliseconds
	I0816 10:24:18.479788    4656 node_ready.go:49] node "ha-286000-m02" has status "Ready":"True"
	I0816 10:24:18.479801    4656 node_ready.go:38] duration metric: took 9.161930596s for node "ha-286000-m02" to be "Ready" ...
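
	The 9.16s node_ready wait above is a poll of GET /api/v1/nodes/<name> until the NodeReady condition turns True. A sketch of the same loop using client-go; the kubeconfig path is a placeholder, and the call signatures assume a current client-go release:

	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	        if err != nil {
	            panic(err)
	        }
	        client := kubernetes.NewForConfigOrDie(cfg)
	        deadline := time.Now().Add(6 * time.Minute)
	        for time.Now().Before(deadline) {
	            node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-286000-m02", metav1.GetOptions{})
	            if err == nil {
	                for _, c := range node.Status.Conditions {
	                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
	                        fmt.Println("node is Ready")
	                        return
	                    }
	                }
	            }
	            time.Sleep(2 * time.Second) // re-poll until Ready or timeout
	        }
	        fmt.Println("timed out waiting for node")
	    }
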
	I0816 10:24:18.479809    4656 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 10:24:18.479841    4656 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0816 10:24:18.479849    4656 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0816 10:24:18.479888    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:24:18.479893    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.479899    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.479903    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.524673    4656 round_trippers.go:574] Response Status: 200 OK in 44 milliseconds
	I0816 10:24:18.529733    4656 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-2kqjf" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:18.529785    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-2kqjf
	I0816 10:24:18.529790    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.529807    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.529813    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.533009    4656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:24:18.533408    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:24:18.533415    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.533421    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.533425    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.536536    4656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:24:18.536873    4656 pod_ready.go:93] pod "coredns-6f6b679f8f-2kqjf" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:18.536881    4656 pod_ready.go:82] duration metric: took 7.13625ms for pod "coredns-6f6b679f8f-2kqjf" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:18.536890    4656 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-rfbz7" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:18.536923    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-rfbz7
	I0816 10:24:18.536928    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.536933    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.536936    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.538881    4656 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:24:18.539268    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:24:18.539275    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.539280    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.539283    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.541207    4656 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:24:18.541586    4656 pod_ready.go:93] pod "coredns-6f6b679f8f-rfbz7" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:18.541594    4656 pod_ready.go:82] duration metric: took 4.698747ms for pod "coredns-6f6b679f8f-rfbz7" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:18.541600    4656 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:18.541631    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-286000
	I0816 10:24:18.541636    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.541641    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.541646    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.543814    4656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:24:18.544226    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:24:18.544232    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.544238    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.544241    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.546294    4656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:24:18.546667    4656 pod_ready.go:93] pod "etcd-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:18.546676    4656 pod_ready.go:82] duration metric: took 5.071416ms for pod "etcd-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:18.546683    4656 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:18.546714    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-286000-m02
	I0816 10:24:18.546719    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.546724    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.546727    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.548810    4656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:24:18.549180    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:24:18.549187    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.549193    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.549196    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.551164    4656 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:24:18.551594    4656 pod_ready.go:93] pod "etcd-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:18.551602    4656 pod_ready.go:82] duration metric: took 4.914791ms for pod "etcd-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:18.551612    4656 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:18.551646    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000
	I0816 10:24:18.551651    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.551657    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.551661    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.553736    4656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:24:18.680501    4656 request.go:632] Waited for 126.254478ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:24:18.680609    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:24:18.680620    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.680631    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.680639    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.684350    4656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:24:18.684850    4656 pod_ready.go:93] pod "kube-apiserver-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:18.684859    4656 pod_ready.go:82] duration metric: took 133.250923ms for pod "kube-apiserver-ha-286000" in "kube-system" namespace to be "Ready" ...
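
	The recurring "Waited for ... due to client-side throttling" lines come from client-go's default rate limiter: with QPS and Burst left at zero on the rest.Config (as in the client config dumped earlier), the client falls back to roughly 5 requests per second with a burst of 10, so a tight sequence of GETs queues up. A sketch of where those knobs live; the values are illustrative, not what minikube sets:

	    package main

	    import (
	        "fmt"

	        "k8s.io/client-go/rest"
	    )

	    func main() {
	        cfg := &rest.Config{Host: "https://192.169.0.5:8443"}
	        // Zero values mean client-go applies its defaults (QPS 5, Burst 10),
	        // which is what produces the throttling waits in the log above.
	        cfg.QPS = 50
	        cfg.Burst = 100
	        fmt.Printf("QPS=%v Burst=%v\n", cfg.QPS, cfg.Burst)
	    }
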
	I0816 10:24:18.684865    4656 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:18.880626    4656 request.go:632] Waited for 195.713304ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000-m02
	I0816 10:24:18.880742    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000-m02
	I0816 10:24:18.880753    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.880765    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.880778    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.884447    4656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:24:19.081261    4656 request.go:632] Waited for 196.182218ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:24:19.081349    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:24:19.081358    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:19.081368    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:19.081377    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:19.085528    4656 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 10:24:19.085961    4656 pod_ready.go:93] pod "kube-apiserver-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:19.085970    4656 pod_ready.go:82] duration metric: took 401.129633ms for pod "kube-apiserver-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:19.085977    4656 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:19.279954    4656 request.go:632] Waited for 193.926578ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000
	I0816 10:24:19.279986    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000
	I0816 10:24:19.279991    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:19.279997    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:19.280003    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:19.283105    4656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:24:19.480663    4656 request.go:632] Waited for 196.83909ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:24:19.480698    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:24:19.480704    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:19.480710    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:19.480728    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:19.483828    4656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:24:19.484258    4656 pod_ready.go:93] pod "kube-controller-manager-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:19.484269    4656 pod_ready.go:82] duration metric: took 398.316107ms for pod "kube-controller-manager-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:19.484276    4656 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:19.681917    4656 request.go:632] Waited for 197.597037ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000-m02
	I0816 10:24:19.682075    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000-m02
	I0816 10:24:19.682091    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:19.682103    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:19.682113    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:19.686127    4656 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 10:24:19.880667    4656 request.go:632] Waited for 193.865313ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:24:19.880730    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:24:19.880736    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:19.880742    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:19.880750    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:19.884780    4656 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 10:24:19.885298    4656 pod_ready.go:93] pod "kube-controller-manager-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:19.885308    4656 pod_ready.go:82] duration metric: took 401.055356ms for pod "kube-controller-manager-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:19.885315    4656 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5qhgk" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:20.081205    4656 request.go:632] Waited for 195.805147ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5qhgk
	I0816 10:24:20.081294    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5qhgk
	I0816 10:24:20.081304    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:20.081316    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:20.081321    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:20.085631    4656 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 10:24:20.280455    4656 request.go:632] Waited for 194.474574ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m04
	I0816 10:24:20.280532    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m04
	I0816 10:24:20.280539    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:20.280547    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:20.280552    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:20.287097    4656 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0816 10:24:20.287492    4656 pod_ready.go:93] pod "kube-proxy-5qhgk" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:20.287501    4656 pod_ready.go:82] duration metric: took 402.209883ms for pod "kube-proxy-5qhgk" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:20.287508    4656 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pt669" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:20.480572    4656 request.go:632] Waited for 193.037822ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pt669
	I0816 10:24:20.480648    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pt669
	I0816 10:24:20.480652    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:20.480659    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:20.480663    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:20.483171    4656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:24:20.681664    4656 request.go:632] Waited for 198.111953ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:24:20.681763    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:24:20.681771    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:20.681779    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:20.681784    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:20.684372    4656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:24:20.684693    4656 pod_ready.go:93] pod "kube-proxy-pt669" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:20.684702    4656 pod_ready.go:82] duration metric: took 397.216841ms for pod "kube-proxy-pt669" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:20.684712    4656 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-w4nt2" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:20.879782    4656 request.go:632] Waited for 195.039009ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w4nt2
	I0816 10:24:20.879907    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w4nt2
	I0816 10:24:20.879921    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:20.879933    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:20.879941    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:20.883394    4656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:24:21.079930    4656 request.go:632] Waited for 195.888686ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:24:21.080028    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:24:21.080039    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:21.080050    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:21.080059    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:21.083488    4656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:24:21.083893    4656 pod_ready.go:93] pod "kube-proxy-w4nt2" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:21.083903    4656 pod_ready.go:82] duration metric: took 399.212461ms for pod "kube-proxy-w4nt2" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:21.083911    4656 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:21.281558    4656 request.go:632] Waited for 197.607208ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000
	I0816 10:24:21.281628    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000
	I0816 10:24:21.281639    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:21.281648    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:21.281654    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:21.284223    4656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:24:21.480419    4656 request.go:632] Waited for 195.838756ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:24:21.480514    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:24:21.480525    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:21.480537    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:21.480544    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:21.483887    4656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:24:21.484430    4656 pod_ready.go:93] pod "kube-scheduler-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:21.484439    4656 pod_ready.go:82] duration metric: took 400.549346ms for pod "kube-scheduler-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:21.484446    4656 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:21.679727    4656 request.go:632] Waited for 195.252345ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000-m02
	I0816 10:24:21.679760    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000-m02
	I0816 10:24:21.679765    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:21.679769    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:21.679805    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:21.686476    4656 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0816 10:24:21.880162    4656 request.go:632] Waited for 193.203193ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:24:21.880221    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:24:21.880231    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:21.880247    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:21.880256    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:21.884015    4656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:24:21.884602    4656 pod_ready.go:93] pod "kube-scheduler-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:21.884611    4656 pod_ready.go:82] duration metric: took 400.186514ms for pod "kube-scheduler-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:21.884619    4656 pod_ready.go:39] duration metric: took 3.405043457s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 10:24:21.884636    4656 api_server.go:52] waiting for apiserver process to appear ...
	I0816 10:24:21.884692    4656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:24:21.896175    4656 api_server.go:72] duration metric: took 12.763101701s to wait for apiserver process to appear ...
	I0816 10:24:21.896187    4656 api_server.go:88] waiting for apiserver healthz status ...
	I0816 10:24:21.896203    4656 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0816 10:24:21.900677    4656 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0816 10:24:21.900711    4656 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0816 10:24:21.900715    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:21.900720    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:21.900725    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:21.901496    4656 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0816 10:24:21.901599    4656 api_server.go:141] control plane version: v1.31.0
	I0816 10:24:21.901609    4656 api_server.go:131] duration metric: took 5.41777ms to wait for apiserver health ...
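
	The healthz probe above is a plain HTTPS GET against the apiserver that expects a 200 with body "ok". A self-contained sketch of the same check; InsecureSkipVerify is for illustration only, since the real client trusts the cluster CA and presents client certificates instead:

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "net/http"
	    )

	    func main() {
	        client := &http.Client{Transport: &http.Transport{
	            // Illustration only; do not skip verification in real tooling.
	            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	        }}
	        resp, err := client.Get("https://192.169.0.5:8443/healthz")
	        if err != nil {
	            fmt.Println("healthz unreachable:", err)
	            return
	        }
	        defer resp.Body.Close()
	        body, _ := io.ReadAll(resp.Body)
	        fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
	    }
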
	I0816 10:24:21.901617    4656 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 10:24:22.081425    4656 request.go:632] Waited for 179.775499ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:24:22.081509    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:24:22.081521    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:22.081533    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:22.081542    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:22.087308    4656 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 10:24:22.090908    4656 system_pods.go:59] 19 kube-system pods found
	I0816 10:24:22.090924    4656 system_pods.go:61] "coredns-6f6b679f8f-2kqjf" [be18cd09-3e3b-4749-bf29-7001a879f593] Running
	I0816 10:24:22.090929    4656 system_pods.go:61] "coredns-6f6b679f8f-rfbz7" [a593e27a-e38b-46c7-a603-44963c31c095] Running
	I0816 10:24:22.090932    4656 system_pods.go:61] "etcd-ha-286000" [c45bc623-befe-4e88-8da1-bb4f7d7be5b2] Running
	I0816 10:24:22.090935    4656 system_pods.go:61] "etcd-ha-286000-m02" [e14fbe0c-4527-4cbb-8c13-01c0b32a8d15] Running
	I0816 10:24:22.090938    4656 system_pods.go:61] "kindnet-b9r6s" [07db3ed9-4355-45a2-be01-15273d773c65] Running
	I0816 10:24:22.090940    4656 system_pods.go:61] "kindnet-t9kjf" [20c57f28-5e86-44a4-8a2a-07e1e240abf2] Running
	I0816 10:24:22.090943    4656 system_pods.go:61] "kindnet-whqxb" [1cc5291b-52ea-44b5-b87c-607d46b5281a] Running
	I0816 10:24:22.090946    4656 system_pods.go:61] "kube-apiserver-ha-286000" [c0d298a9-931b-4084-9e5f-01a88de27456] Running
	I0816 10:24:22.090949    4656 system_pods.go:61] "kube-apiserver-ha-286000-m02" [3666c131-2b88-483a-ab8c-b18cb8db7d6f] Running
	I0816 10:24:22.090952    4656 system_pods.go:61] "kube-controller-manager-ha-286000" [5a4f4c5d-d89a-4e5f-b022-479ca31f64cd] Running
	I0816 10:24:22.090954    4656 system_pods.go:61] "kube-controller-manager-ha-286000-m02" [a347c3d4-fa47-49ea-93a0-83087017a2bd] Running
	I0816 10:24:22.090957    4656 system_pods.go:61] "kube-proxy-5qhgk" [da0cb383-9e0b-44cc-9f79-7861029514f7] Running
	I0816 10:24:22.090959    4656 system_pods.go:61] "kube-proxy-pt669" [798c956f-8f08-465a-b0d9-90b65f703f80] Running
	I0816 10:24:22.090962    4656 system_pods.go:61] "kube-proxy-w4nt2" [79ce2248-f8fd-4a3b-aeb7-f81d7ad64564] Running
	I0816 10:24:22.090967    4656 system_pods.go:61] "kube-scheduler-ha-286000" [b4af80e0-cbb0-4257-a212-3585faff0875] Running
	I0816 10:24:22.090971    4656 system_pods.go:61] "kube-scheduler-ha-286000-m02" [89920e93-c959-47ec-8a2c-f2b63e8c3d3a] Running
	I0816 10:24:22.090973    4656 system_pods.go:61] "kube-vip-ha-286000" [b0f85be2-2afb-44b0-a701-161b24529349] Running
	I0816 10:24:22.090976    4656 system_pods.go:61] "kube-vip-ha-286000-m02" [4982dc53-946d-4f75-8e9e-452ef39ff2fa] Running
	I0816 10:24:22.090978    4656 system_pods.go:61] "storage-provisioner" [4805d53b-2db3-4092-a3f2-d4a854e93adc] Running
	I0816 10:24:22.090983    4656 system_pods.go:74] duration metric: took 189.374292ms to wait for pod list to return data ...
	I0816 10:24:22.090989    4656 default_sa.go:34] waiting for default service account to be created ...
	I0816 10:24:22.280932    4656 request.go:632] Waited for 189.91131ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0816 10:24:22.280992    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0816 10:24:22.280998    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:22.281004    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:22.281007    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:22.286126    4656 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 10:24:22.286303    4656 default_sa.go:45] found service account: "default"
	I0816 10:24:22.286313    4656 default_sa.go:55] duration metric: took 195.332329ms for default service account to be created ...
	I0816 10:24:22.286320    4656 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 10:24:22.480087    4656 request.go:632] Waited for 193.706904ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:24:22.480149    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:24:22.480160    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:22.480172    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:22.480181    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:22.486391    4656 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0816 10:24:22.490416    4656 system_pods.go:86] 19 kube-system pods found
	I0816 10:24:22.490428    4656 system_pods.go:89] "coredns-6f6b679f8f-2kqjf" [be18cd09-3e3b-4749-bf29-7001a879f593] Running
	I0816 10:24:22.490432    4656 system_pods.go:89] "coredns-6f6b679f8f-rfbz7" [a593e27a-e38b-46c7-a603-44963c31c095] Running
	I0816 10:24:22.490435    4656 system_pods.go:89] "etcd-ha-286000" [c45bc623-befe-4e88-8da1-bb4f7d7be5b2] Running
	I0816 10:24:22.490443    4656 system_pods.go:89] "etcd-ha-286000-m02" [e14fbe0c-4527-4cbb-8c13-01c0b32a8d15] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 10:24:22.490447    4656 system_pods.go:89] "kindnet-b9r6s" [07db3ed9-4355-45a2-be01-15273d773c65] Running
	I0816 10:24:22.490454    4656 system_pods.go:89] "kindnet-t9kjf" [20c57f28-5e86-44a4-8a2a-07e1e240abf2] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0816 10:24:22.490458    4656 system_pods.go:89] "kindnet-whqxb" [1cc5291b-52ea-44b5-b87c-607d46b5281a] Running
	I0816 10:24:22.490462    4656 system_pods.go:89] "kube-apiserver-ha-286000" [c0d298a9-931b-4084-9e5f-01a88de27456] Running
	I0816 10:24:22.490466    4656 system_pods.go:89] "kube-apiserver-ha-286000-m02" [3666c131-2b88-483a-ab8c-b18cb8db7d6f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 10:24:22.490469    4656 system_pods.go:89] "kube-controller-manager-ha-286000" [5a4f4c5d-d89a-4e5f-b022-479ca31f64cd] Running
	I0816 10:24:22.490478    4656 system_pods.go:89] "kube-controller-manager-ha-286000-m02" [a347c3d4-fa47-49ea-93a0-83087017a2bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 10:24:22.490483    4656 system_pods.go:89] "kube-proxy-5qhgk" [da0cb383-9e0b-44cc-9f79-7861029514f7] Running
	I0816 10:24:22.490487    4656 system_pods.go:89] "kube-proxy-pt669" [798c956f-8f08-465a-b0d9-90b65f703f80] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 10:24:22.490496    4656 system_pods.go:89] "kube-proxy-w4nt2" [79ce2248-f8fd-4a3b-aeb7-f81d7ad64564] Running
	I0816 10:24:22.490499    4656 system_pods.go:89] "kube-scheduler-ha-286000" [b4af80e0-cbb0-4257-a212-3585faff0875] Running
	I0816 10:24:22.490503    4656 system_pods.go:89] "kube-scheduler-ha-286000-m02" [89920e93-c959-47ec-8a2c-f2b63e8c3d3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 10:24:22.490507    4656 system_pods.go:89] "kube-vip-ha-286000" [b0f85be2-2afb-44b0-a701-161b24529349] Running
	I0816 10:24:22.490511    4656 system_pods.go:89] "kube-vip-ha-286000-m02" [4982dc53-946d-4f75-8e9e-452ef39ff2fa] Running
	I0816 10:24:22.490514    4656 system_pods.go:89] "storage-provisioner" [4805d53b-2db3-4092-a3f2-d4a854e93adc] Running
	I0816 10:24:22.490518    4656 system_pods.go:126] duration metric: took 204.207739ms to wait for k8s-apps to be running ...
	I0816 10:24:22.490523    4656 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 10:24:22.490574    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:24:22.501971    4656 system_svc.go:56] duration metric: took 11.445041ms WaitForService to wait for kubelet
	I0816 10:24:22.501986    4656 kubeadm.go:582] duration metric: took 13.368953512s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:24:22.501997    4656 node_conditions.go:102] verifying NodePressure condition ...
	I0816 10:24:22.681633    4656 request.go:632] Waited for 179.608953ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0816 10:24:22.681696    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0816 10:24:22.681702    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:22.681708    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:22.681744    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:22.684771    4656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:24:22.685508    4656 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 10:24:22.685523    4656 node_conditions.go:123] node cpu capacity is 2
	I0816 10:24:22.685532    4656 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 10:24:22.685535    4656 node_conditions.go:123] node cpu capacity is 2
	I0816 10:24:22.685538    4656 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 10:24:22.685541    4656 node_conditions.go:123] node cpu capacity is 2
	I0816 10:24:22.685544    4656 node_conditions.go:105] duration metric: took 183.55481ms to run NodePressure ...
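
	The NodePressure verification above reads each node's reported capacity straight from the API: three nodes, each with 17734596Ki of ephemeral storage and 2 CPUs. A sketch that lists the same fields with client-go (the kubeconfig path is again a placeholder):

	    package main

	    import (
	        "context"
	        "fmt"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	        if err != nil {
	            panic(err)
	        }
	        client := kubernetes.NewForConfigOrDie(cfg)
	        nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	        if err != nil {
	            panic(err)
	        }
	        for _, n := range nodes.Items {
	            // Copy the quantities so the pointer-receiver String() is callable.
	            cpu := n.Status.Capacity[corev1.ResourceCPU]
	            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
	            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	        }
	    }
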
	I0816 10:24:22.685552    4656 start.go:241] waiting for startup goroutines ...
	I0816 10:24:22.685571    4656 start.go:255] writing updated cluster config ...
	I0816 10:24:22.707964    4656 out.go:201] 
	I0816 10:24:22.729754    4656 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:24:22.729889    4656 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:24:22.752182    4656 out.go:177] * Starting "ha-286000-m03" control-plane node in "ha-286000" cluster
	I0816 10:24:22.794355    4656 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:24:22.794388    4656 cache.go:56] Caching tarball of preloaded images
	I0816 10:24:22.794595    4656 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 10:24:22.794623    4656 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:24:22.794796    4656 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:24:22.870926    4656 start.go:360] acquireMachinesLock for ha-286000-m03: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:24:22.871061    4656 start.go:364] duration metric: took 106.312µs to acquireMachinesLock for "ha-286000-m03"
	I0816 10:24:22.871092    4656 start.go:96] Skipping create...Using existing machine configuration
	I0816 10:24:22.871102    4656 fix.go:54] fixHost starting: m03
	I0816 10:24:22.871530    4656 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:24:22.871567    4656 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:24:22.881793    4656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52214
	I0816 10:24:22.882176    4656 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:24:22.882559    4656 main.go:141] libmachine: Using API Version  1
	I0816 10:24:22.882581    4656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:24:22.882800    4656 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:24:22.882926    4656 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:24:22.883020    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetState
	I0816 10:24:22.883103    4656 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:24:22.883215    4656 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:24:22.884141    4656 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid 3849 missing from process table
	I0816 10:24:22.884173    4656 fix.go:112] recreateIfNeeded on ha-286000-m03: state=Stopped err=<nil>
	I0816 10:24:22.884183    4656 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	W0816 10:24:22.884273    4656 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 10:24:22.934970    4656 out.go:177] * Restarting existing hyperkit VM for "ha-286000-m03" ...
	I0816 10:24:22.989195    4656 main.go:141] libmachine: (ha-286000-m03) Calling .Start
	I0816 10:24:22.989384    4656 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:24:22.989428    4656 main.go:141] libmachine: (ha-286000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/hyperkit.pid
	I0816 10:24:22.990416    4656 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid 3849 missing from process table
	I0816 10:24:22.990433    4656 main.go:141] libmachine: (ha-286000-m03) DBG | pid 3849 is in state "Stopped"
	I0816 10:24:22.990450    4656 main.go:141] libmachine: (ha-286000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/hyperkit.pid...
	I0816 10:24:22.991046    4656 main.go:141] libmachine: (ha-286000-m03) DBG | Using UUID 408efb18-ff91-428f-b414-6eb0d982a779
	I0816 10:24:23.018344    4656 main.go:141] libmachine: (ha-286000-m03) DBG | Generated MAC 8a:e:de:5b:b5:8b
	I0816 10:24:23.018367    4656 main.go:141] libmachine: (ha-286000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000
	I0816 10:24:23.018512    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"408efb18-ff91-428f-b414-6eb0d982a779", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003acb40)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:24:23.018542    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"408efb18-ff91-428f-b414-6eb0d982a779", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003acb40)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:24:23.018607    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "408efb18-ff91-428f-b414-6eb0d982a779", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/ha-286000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"}
	I0816 10:24:23.018646    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 408efb18-ff91-428f-b414-6eb0d982a779 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/ha-286000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"
	I0816 10:24:23.018659    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 10:24:23.019982    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 DEBUG: hyperkit: Pid is 4694
	I0816 10:24:23.020375    4656 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 0
	I0816 10:24:23.020392    4656 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:24:23.020487    4656 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 4694
	I0816 10:24:23.022453    4656 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:24:23.022498    4656 main.go:141] libmachine: (ha-286000-m03) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0816 10:24:23.022517    4656 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:24:23.022531    4656 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:24:23.022542    4656 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:24:23.022552    4656 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66c0d7f9}
	I0816 10:24:23.022566    4656 main.go:141] libmachine: (ha-286000-m03) DBG | Found match: 8a:e:de:5b:b5:8b
	I0816 10:24:23.022574    4656 main.go:141] libmachine: (ha-286000-m03) DBG | IP: 192.169.0.7
	I0816 10:24:23.022592    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetConfigRaw
	I0816 10:24:23.023252    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:24:23.023444    4656 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:24:23.023931    4656 machine.go:93] provisionDockerMachine start ...
	I0816 10:24:23.023941    4656 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:24:23.024079    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:24:23.024190    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:24:23.024302    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:23.024432    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:23.024554    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:24:23.024692    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:23.024832    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:24:23.024839    4656 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 10:24:23.028441    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 10:24:23.037003    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 10:24:23.038503    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:24:23.038539    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:24:23.038554    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:24:23.038589    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:24:23.422756    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 10:24:23.422770    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 10:24:23.537534    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:24:23.537554    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:24:23.537563    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:24:23.537570    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:24:23.538449    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 10:24:23.538460    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 10:24:29.168490    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:29 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 10:24:29.168581    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:29 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 10:24:29.168594    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:29 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 10:24:29.192004    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:29 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 10:24:58.091940    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 10:24:58.091955    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:24:58.092103    4656 buildroot.go:166] provisioning hostname "ha-286000-m03"
	I0816 10:24:58.092114    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:24:58.092224    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:24:58.092330    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:24:58.092419    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:58.092518    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:58.092626    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:24:58.092758    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:58.092916    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:24:58.092925    4656 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-286000-m03 && echo "ha-286000-m03" | sudo tee /etc/hostname
	I0816 10:24:58.165459    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-286000-m03
	
	I0816 10:24:58.165475    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:24:58.165609    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:24:58.165705    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:58.165800    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:58.165888    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:24:58.166012    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:58.166160    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:24:58.166171    4656 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-286000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-286000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-286000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 10:24:58.234524    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
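
The hostname provisioning above is a two-phase write: sudo hostname changes the transient kernel hostname immediately, tee /etc/hostname persists it across reboots, and the follow-up script keeps /etc/hosts consistent so the new name still resolves to 127.0.1.1. A minimal Go sketch of assembling that script; hostnameScript is a hypothetical helper for illustration, not minikube's actual function:

    package main

    import "fmt"

    // hostnameScript builds the /etc/hosts patch logged above: if the hostname
    // is missing, rewrite an existing 127.0.1.1 entry or append a new one.
    func hostnameScript(name string) string {
        return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
      fi
    fi`, name)
    }

    func main() {
        fmt.Println(hostnameScript("ha-286000-m03"))
    }
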
	I0816 10:24:58.234539    4656 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19461-1276/.minikube CaCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19461-1276/.minikube}
	I0816 10:24:58.234548    4656 buildroot.go:174] setting up certificates
	I0816 10:24:58.234555    4656 provision.go:84] configureAuth start
	I0816 10:24:58.234562    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:24:58.234691    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:24:58.234792    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:24:58.234865    4656 provision.go:143] copyHostCerts
	I0816 10:24:58.234895    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:24:58.234961    4656 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem, removing ...
	I0816 10:24:58.234967    4656 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:24:58.235111    4656 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem (1078 bytes)
	I0816 10:24:58.235314    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:24:58.235356    4656 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem, removing ...
	I0816 10:24:58.235361    4656 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:24:58.235442    4656 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem (1123 bytes)
	I0816 10:24:58.235582    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:24:58.235624    4656 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem, removing ...
	I0816 10:24:58.235629    4656 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:24:58.235704    4656 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem (1679 bytes)
	I0816 10:24:58.235845    4656 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem org=jenkins.ha-286000-m03 san=[127.0.0.1 192.169.0.7 ha-286000-m03 localhost minikube]
	I0816 10:24:58.291944    4656 provision.go:177] copyRemoteCerts
	I0816 10:24:58.291996    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 10:24:58.292012    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:24:58.292152    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:24:58.292249    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:58.292325    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:24:58.292403    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:24:58.328961    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 10:24:58.329060    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 10:24:58.348824    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 10:24:58.348900    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 10:24:58.369137    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 10:24:58.369210    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 10:24:58.388899    4656 provision.go:87] duration metric: took 154.336521ms to configureAuth
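
configureAuth just regenerated the machine's server certificate (SANs: 127.0.0.1, 192.169.0.7, ha-286000-m03, localhost, minikube) and pushed ca.pem, server.pem, and server-key.pem to /etc/docker, where the --tlsverify flags in the dockerd unit below expect them. A hedged sketch of sanity-checking such a pair with Go's standard library, using the remote paths from the log:

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "log"
        "os"
    )

    func main() {
        // Load the server keypair the same way dockerd will.
        cert, err := tls.LoadX509KeyPair("/etc/docker/server.pem", "/etc/docker/server-key.pem")
        if err != nil {
            log.Fatalf("key pair mismatch: %v", err)
        }
        // Confirm the leaf chains to the CA that clients pin.
        caPEM, err := os.ReadFile("/etc/docker/ca.pem")
        if err != nil {
            log.Fatal(err)
        }
        pool := x509.NewCertPool()
        if !pool.AppendCertsFromPEM(caPEM) {
            log.Fatal("ca.pem contains no certificates")
        }
        leaf, err := x509.ParseCertificate(cert.Certificate[0])
        if err != nil {
            log.Fatal(err)
        }
        if _, err := leaf.Verify(x509.VerifyOptions{Roots: pool}); err != nil {
            log.Fatalf("server cert does not chain to ca.pem: %v", err)
        }
        fmt.Println("server.pem/server-key.pem match and chain to ca.pem")
    }
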
	I0816 10:24:58.388918    4656 buildroot.go:189] setting minikube options for container-runtime
	I0816 10:24:58.389098    4656 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:24:58.389135    4656 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:24:58.389270    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:24:58.389362    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:24:58.389460    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:58.389543    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:58.389622    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:24:58.389731    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:58.389859    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:24:58.389867    4656 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 10:24:58.452406    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 10:24:58.452425    4656 buildroot.go:70] root file system type: tmpfs
	I0816 10:24:58.452504    4656 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 10:24:58.452516    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:24:58.452651    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:24:58.452745    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:58.452844    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:58.452943    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:24:58.453082    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:58.453228    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:24:58.453271    4656 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 10:24:58.524937    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 10:24:58.524958    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:24:58.525096    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:24:58.525191    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:58.525277    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:58.525354    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:24:58.525485    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:58.525630    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:24:58.525643    4656 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 10:25:00.070144    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0816 10:25:00.070159    4656 machine.go:96] duration metric: took 37.04784939s to provisionDockerMachine
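
The unit was written to docker.service.new and swapped into place, with daemon-reload, enable, and restart, only because diff reported a difference (here the file did not exist yet, hence the "can't stat" message and the fresh enable symlink); on an unchanged re-provision the restart is skipped entirely. A hedged Go sketch of the same write-if-changed idempotency pattern, not minikube's code:

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // writeIfChanged mirrors the "diff || mv" step above: install the new
    // unit only when it differs from what is on disk, and report whether a
    // daemon-reload/restart is needed.
    func writeIfChanged(path string, content []byte) (changed bool, err error) {
        old, err := os.ReadFile(path)
        if err == nil && bytes.Equal(old, content) {
            return false, nil // identical: skip reload and restart
        }
        tmp := path + ".new"
        if err := os.WriteFile(tmp, content, 0o644); err != nil {
            return false, err
        }
        return true, os.Rename(tmp, path)
    }

    func main() {
        changed, err := writeIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
        fmt.Println(changed, err)
    }
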
	I0816 10:25:00.070167    4656 start.go:293] postStartSetup for "ha-286000-m03" (driver="hyperkit")
	I0816 10:25:00.070174    4656 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 10:25:00.070189    4656 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:25:00.070367    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 10:25:00.070380    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:25:00.070472    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:25:00.070550    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:25:00.070650    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:25:00.070738    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:25:00.107373    4656 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 10:25:00.110616    4656 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 10:25:00.110628    4656 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/addons for local assets ...
	I0816 10:25:00.110727    4656 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/files for local assets ...
	I0816 10:25:00.110900    4656 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> 18312.pem in /etc/ssl/certs
	I0816 10:25:00.110906    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /etc/ssl/certs/18312.pem
	I0816 10:25:00.111116    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 10:25:00.118270    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:25:00.138002    4656 start.go:296] duration metric: took 67.828962ms for postStartSetup
	I0816 10:25:00.138023    4656 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:25:00.138205    4656 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0816 10:25:00.138223    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:25:00.138316    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:25:00.138399    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:25:00.138484    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:25:00.138558    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:25:00.176923    4656 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0816 10:25:00.176990    4656 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0816 10:25:00.228121    4656 fix.go:56] duration metric: took 37.358659467s for fixHost
	I0816 10:25:00.228163    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:25:00.228436    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:25:00.228658    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:25:00.228845    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:25:00.229035    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:25:00.229265    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:25:00.229477    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:25:00.229490    4656 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 10:25:00.290756    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723829100.434156000
	
	I0816 10:25:00.290771    4656 fix.go:216] guest clock: 1723829100.434156000
	I0816 10:25:00.290778    4656 fix.go:229] Guest: 2024-08-16 10:25:00.434156 -0700 PDT Remote: 2024-08-16 10:25:00.228148 -0700 PDT m=+88.850268934 (delta=206.008ms)
	I0816 10:25:00.290788    4656 fix.go:200] guest clock delta is within tolerance: 206.008ms
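
fix.go reads the guest's date +%s.%N, compares it with the host wall clock, and only forces a resync when the delta exceeds a tolerance; the 206ms measured here passes. A minimal sketch of that comparison; the 2-second threshold below is an illustrative assumption, not minikube's configured value:

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "time"
    )

    func main() {
        // Guest timestamp exactly as printed by `date +%s.%N` in the log.
        sec, err := strconv.ParseFloat("1723829100.434156000", 64)
        if err != nil {
            panic(err)
        }
        guest := time.Unix(0, int64(sec*float64(time.Second)))

        delta := time.Since(guest) // host minus guest

        const tolerance = 2 * time.Second // assumed threshold
        within := math.Abs(float64(delta)) <= float64(tolerance)
        fmt.Printf("delta=%v within tolerance: %v\n", delta, within)
    }
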
	I0816 10:25:00.290792    4656 start.go:83] releasing machines lock for "ha-286000-m03", held for 37.421364862s
	I0816 10:25:00.290808    4656 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:25:00.290938    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:25:00.313666    4656 out.go:177] * Found network options:
	I0816 10:25:00.334418    4656 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0816 10:25:00.355435    4656 proxy.go:119] fail to check proxy env: Error ip not in block
	W0816 10:25:00.355461    4656 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:25:00.355478    4656 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:25:00.356143    4656 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:25:00.356356    4656 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:25:00.356474    4656 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 10:25:00.356513    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	W0816 10:25:00.356569    4656 proxy.go:119] fail to check proxy env: Error ip not in block
	W0816 10:25:00.356590    4656 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:25:00.356679    4656 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0816 10:25:00.356698    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:25:00.356711    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:25:00.356905    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:25:00.356940    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:25:00.357121    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:25:00.357153    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:25:00.357335    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:25:00.357342    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:25:00.357519    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	W0816 10:25:00.391006    4656 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 10:25:00.391060    4656 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 10:25:00.439137    4656 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 10:25:00.439154    4656 start.go:495] detecting cgroup driver to use...
	I0816 10:25:00.439231    4656 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:25:00.454661    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 10:25:00.463185    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 10:25:00.471601    4656 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 10:25:00.471658    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 10:25:00.480421    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:25:00.488812    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 10:25:00.497664    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:25:00.506080    4656 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 10:25:00.514726    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 10:25:00.523293    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 10:25:00.531650    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 10:25:00.540020    4656 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 10:25:00.547503    4656 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 10:25:00.555089    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:25:00.643202    4656 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0816 10:25:00.663102    4656 start.go:495] detecting cgroup driver to use...
	I0816 10:25:00.663170    4656 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 10:25:00.680492    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:25:00.693170    4656 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 10:25:00.707541    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:25:00.718044    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:25:00.728609    4656 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 10:25:00.747431    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:25:00.757669    4656 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:25:00.772722    4656 ssh_runner.go:195] Run: which cri-dockerd
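
Runtime detection first pointed crictl at containerd's socket, then, with containerd and crio confirmed stopped, rewrote /etc/crictl.yaml toward cri-dockerd just above, so the crictl version call further down talks to the Docker CRI shim. A sketch of producing that one-line config, writing under /tmp rather than /etc for illustration:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // crictl reads this file to find its CRI endpoint; with the docker
        // runtime minikube points it at cri-dockerd's socket.
        const cfg = "runtime-endpoint: unix:///var/run/cri-dockerd.sock\n"
        if err := os.WriteFile("/tmp/crictl.yaml", []byte(cfg), 0o644); err != nil {
            panic(err)
        }
        fmt.Print(cfg)
    }
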
	I0816 10:25:00.775964    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 10:25:00.783500    4656 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 10:25:00.797291    4656 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 10:25:00.889940    4656 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 10:25:00.996518    4656 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 10:25:00.996540    4656 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 10:25:01.010228    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:25:01.104164    4656 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:25:03.365849    4656 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.261743451s)
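
The 130-byte daemon.json pushed before this restart pins dockerd to the cgroupfs driver, matching the SystemdCgroup = false edit applied to containerd's config.toml earlier. The file's exact contents are not shown in the log; a plausible equivalent follows, and the field set is an assumption:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Assumed shape of the daemon.json minikube writes; only the cgroup
        // driver choice is certain from the surrounding log lines.
        cfg := map[string]any{
            "exec-opts": []string{"native.cgroupdriver=cgroupfs"},
        }
        out, err := json.MarshalIndent(cfg, "", "  ")
        if err != nil {
            panic(err)
        }
        fmt.Println(string(out))
    }
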
	I0816 10:25:03.365910    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0816 10:25:03.376096    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:25:03.386222    4656 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0816 10:25:03.479109    4656 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0816 10:25:03.594325    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:25:03.706928    4656 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0816 10:25:03.721224    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:25:03.732283    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:25:03.827894    4656 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0816 10:25:03.888066    4656 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0816 10:25:03.888145    4656 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0816 10:25:03.893520    4656 start.go:563] Will wait 60s for crictl version
	I0816 10:25:03.893575    4656 ssh_runner.go:195] Run: which crictl
	I0816 10:25:03.896917    4656 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 10:25:03.925631    4656 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0816 10:25:03.925712    4656 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:25:03.944598    4656 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:25:03.985082    4656 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0816 10:25:04.029274    4656 out.go:177]   - env NO_PROXY=192.169.0.5
	I0816 10:25:04.051107    4656 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I0816 10:25:04.072084    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:25:04.072364    4656 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0816 10:25:04.075855    4656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:25:04.085745    4656 mustload.go:65] Loading cluster: ha-286000
	I0816 10:25:04.085928    4656 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:25:04.086156    4656 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:25:04.086178    4656 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:25:04.095096    4656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52236
	I0816 10:25:04.095437    4656 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:25:04.095780    4656 main.go:141] libmachine: Using API Version  1
	I0816 10:25:04.095794    4656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:25:04.095992    4656 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:25:04.096098    4656 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:25:04.096178    4656 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:25:04.096257    4656 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 4669
	I0816 10:25:04.097216    4656 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:25:04.097478    4656 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:25:04.097503    4656 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:25:04.106283    4656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52238
	I0816 10:25:04.106623    4656 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:25:04.106944    4656 main.go:141] libmachine: Using API Version  1
	I0816 10:25:04.106954    4656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:25:04.107151    4656 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:25:04.107299    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:25:04.107413    4656 certs.go:68] Setting up /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000 for IP: 192.169.0.7
	I0816 10:25:04.107420    4656 certs.go:194] generating shared ca certs ...
	I0816 10:25:04.107432    4656 certs.go:226] acquiring lock for ca certs: {Name:mka549fbc467849834d2267da204028c49d88a3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:25:04.107603    4656 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key
	I0816 10:25:04.107673    4656 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key
	I0816 10:25:04.107682    4656 certs.go:256] generating profile certs ...
	I0816 10:25:04.107801    4656 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key
	I0816 10:25:04.107821    4656 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.fbb2b423
	I0816 10:25:04.107836    4656 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.fbb2b423 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I0816 10:25:04.288936    4656 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.fbb2b423 ...
	I0816 10:25:04.288952    4656 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.fbb2b423: {Name:mk5b5d381df2e0229dfa97b94f9501ac61e1f4af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:25:04.289301    4656 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.fbb2b423 ...
	I0816 10:25:04.289309    4656 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.fbb2b423: {Name:mk1c231c3478673ccffbd14f4f0c5e31373f1228 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:25:04.289510    4656 certs.go:381] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.fbb2b423 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt
	I0816 10:25:04.289730    4656 certs.go:385] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.fbb2b423 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key
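
The apiserver certificate has to name every address a client might dial: the service ClusterIP 10.96.0.1, loopback, all three control-plane node IPs, and the kube-vip VIP 192.169.0.254, which is why joining this third control plane forces a regeneration with the widened SAN list. A hedged crypto/x509 sketch of building such a certificate; it is self-signed and ECDSA purely for brevity, whereas the real one is RSA and signed by the cluster CA:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // IP SANs as logged above; omitting any of them breaks TLS for
        // clients dialing that endpoint.
        var ips []net.IP
        for _, s := range []string{"10.96.0.1", "127.0.0.1", "10.0.0.1",
            "192.169.0.5", "192.169.0.6", "192.169.0.7", "192.169.0.254"} {
            ips = append(ips, net.ParseIP(s))
        }
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            DNSNames:     []string{"ha-286000-m03", "localhost", "minikube"},
            IPAddresses:  ips,
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
        }
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
            panic(err)
        }
    }
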
	I0816 10:25:04.289982    4656 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key
	I0816 10:25:04.289991    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 10:25:04.290020    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 10:25:04.290039    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 10:25:04.290058    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 10:25:04.290076    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 10:25:04.290101    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 10:25:04.290120    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 10:25:04.290144    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 10:25:04.290239    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem (1338 bytes)
	W0816 10:25:04.290288    4656 certs.go:480] ignoring /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831_empty.pem, impossibly tiny 0 bytes
	I0816 10:25:04.290297    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 10:25:04.290334    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem (1078 bytes)
	I0816 10:25:04.290369    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem (1123 bytes)
	I0816 10:25:04.290397    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem (1679 bytes)
	I0816 10:25:04.290469    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:25:04.290504    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem -> /usr/share/ca-certificates/1831.pem
	I0816 10:25:04.290530    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /usr/share/ca-certificates/18312.pem
	I0816 10:25:04.290551    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:25:04.290581    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:25:04.290714    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:25:04.290801    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:25:04.290889    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:25:04.290979    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:25:04.320175    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0816 10:25:04.323948    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0816 10:25:04.332572    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0816 10:25:04.335881    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0816 10:25:04.344208    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0816 10:25:04.347261    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0816 10:25:04.355353    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0816 10:25:04.358754    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0816 10:25:04.367226    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0816 10:25:04.370644    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0816 10:25:04.379014    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0816 10:25:04.382464    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0816 10:25:04.390940    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 10:25:04.411283    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 10:25:04.431206    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 10:25:04.451054    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 10:25:04.470415    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0816 10:25:04.490122    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 10:25:04.509717    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 10:25:04.529383    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 10:25:04.549154    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem --> /usr/share/ca-certificates/1831.pem (1338 bytes)
	I0816 10:25:04.568985    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /usr/share/ca-certificates/18312.pem (1708 bytes)
	I0816 10:25:04.588519    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 10:25:04.607970    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0816 10:25:04.621401    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0816 10:25:04.635625    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0816 10:25:04.649570    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0816 10:25:04.663171    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0816 10:25:04.676495    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0816 10:25:04.690056    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0816 10:25:04.703786    4656 ssh_runner.go:195] Run: openssl version
	I0816 10:25:04.707923    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1831.pem && ln -fs /usr/share/ca-certificates/1831.pem /etc/ssl/certs/1831.pem"
	I0816 10:25:04.716268    4656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1831.pem
	I0816 10:25:04.719659    4656 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 16:57 /usr/share/ca-certificates/1831.pem
	I0816 10:25:04.719702    4656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1831.pem
	I0816 10:25:04.723849    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1831.pem /etc/ssl/certs/51391683.0"
	I0816 10:25:04.732246    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18312.pem && ln -fs /usr/share/ca-certificates/18312.pem /etc/ssl/certs/18312.pem"
	I0816 10:25:04.740650    4656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18312.pem
	I0816 10:25:04.743948    4656 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 16:57 /usr/share/ca-certificates/18312.pem
	I0816 10:25:04.743983    4656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18312.pem
	I0816 10:25:04.748103    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18312.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 10:25:04.756745    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 10:25:04.765039    4656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:25:04.768354    4656 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:48 /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:25:04.768417    4656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:25:04.772556    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 10:25:04.781063    4656 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 10:25:04.784249    4656 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 10:25:04.784287    4656 kubeadm.go:934] updating node {m03 192.169.0.7 8443 v1.31.0 docker true true} ...
	I0816 10:25:04.784343    4656 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-286000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 10:25:04.784359    4656 kube-vip.go:115] generating kube-vip config ...
	I0816 10:25:04.784396    4656 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 10:25:04.796986    4656 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 10:25:04.797028    4656 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
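
This static pod runs on every control-plane node: vip_leaderelection makes the replicas race for the plndr-cp-lock Lease (5s duration, 3s renew deadline, 1s retry), the leader advertises the VIP 192.169.0.254 over ARP on eth0, and lb_enable spreads API traffic arriving on port 8443 across healthy apiservers. A hedged sketch of probing the VIP once it is live; TLS verification is skipped only for illustration, a real client would pin the cluster's ca.crt:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
            },
        }
        resp, err := client.Get("https://192.169.0.254:8443/readyz")
        if err != nil {
            fmt.Println("VIP not answering:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("VIP readyz:", resp.Status)
    }
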
	I0816 10:25:04.797080    4656 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 10:25:04.805783    4656 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0816 10:25:04.805828    4656 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0816 10:25:04.815857    4656 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0816 10:25:04.815860    4656 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0816 10:25:04.815857    4656 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0816 10:25:04.815875    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0816 10:25:04.815878    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0816 10:25:04.815911    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:25:04.815963    4656 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0816 10:25:04.815967    4656 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0816 10:25:04.819783    4656 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0816 10:25:04.819808    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0816 10:25:04.819886    4656 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0816 10:25:04.819905    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0816 10:25:04.838560    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0816 10:25:04.838690    4656 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0816 10:25:04.892677    4656 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0816 10:25:04.892722    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
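
Each binary comes from dl.k8s.io with a ?checksum=file:...sha256 query, meaning the download is verified against its published SHA-256 sidecar before being pushed into /var/lib/minikube/binaries. A hedged sketch of that verification step; the digest argument is a placeholder, not a real hash:

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "os"
    )

    // verifySHA256 streams a file through SHA-256 and compares the result
    // with the hex digest from the .sha256 sidecar.
    func verifySHA256(path, wantHex string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()
        h := sha256.New()
        if _, err := io.Copy(h, f); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
            return fmt.Errorf("checksum mismatch: got %s want %s", got, wantHex)
        }
        return nil
    }

    func main() {
        fmt.Println(verifySHA256("/var/lib/minikube/binaries/v1.31.0/kubelet",
            "<digest from kubelet.sha256>"))
    }
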
	I0816 10:25:05.452270    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0816 10:25:05.460515    4656 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0816 10:25:05.473974    4656 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 10:25:05.487288    4656 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0816 10:25:05.501421    4656 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0816 10:25:05.504340    4656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:25:05.514511    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:25:05.610695    4656 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:25:05.627113    4656 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:25:05.627365    4656 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:25:05.650018    4656 out.go:177] * Verifying Kubernetes components...
	I0816 10:25:05.671252    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:25:05.770878    4656 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:25:06.484588    4656 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:25:06.484787    4656 kapi.go:59] client config for ha-286000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key", CAFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x5540f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0816 10:25:06.484828    4656 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0816 10:25:06.484987    4656 node_ready.go:35] waiting up to 6m0s for node "ha-286000-m03" to be "Ready" ...
	I0816 10:25:06.485034    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:06.485039    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:06.485045    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:06.485048    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:06.487783    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:06.985311    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:06.985336    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:06.985348    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:06.985354    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:06.989349    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:07.485490    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:07.485513    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:07.485524    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:07.485529    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:07.489016    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:07.985178    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:07.985193    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:07.985199    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:07.985202    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:07.987679    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:08.487278    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:08.487300    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:08.487309    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:08.487315    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:08.491486    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:25:08.491567    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:08.987160    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:08.987184    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:08.987194    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:08.987200    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:08.990942    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:09.485053    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:09.485101    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:09.485109    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:09.485113    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:09.487562    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:09.985592    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:09.985671    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:09.985687    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:09.985696    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:09.989637    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:10.486025    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:10.486050    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:10.486061    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:10.486067    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:10.489557    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:10.985086    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:10.985127    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:10.985134    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:10.985139    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:10.987914    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:10.987975    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:11.485153    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:11.485176    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:11.485186    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:11.485193    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:11.488752    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:11.986139    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:11.986154    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:11.986162    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:11.986166    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:11.989386    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:12.485803    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:12.485849    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:12.485865    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:12.485870    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:12.489472    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:12.986570    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:12.986596    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:12.986607    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:12.986612    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:12.990236    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:12.990376    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:13.484914    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:13.484926    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:13.484932    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:13.484935    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:13.488977    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:25:13.986680    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:13.986696    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:13.986702    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:13.986705    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:13.989158    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:14.486321    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:14.486382    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:14.486402    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:14.486412    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:14.491203    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:25:14.985877    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:14.985901    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:14.985912    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:14.985949    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:14.989703    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:15.485277    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:15.485292    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:15.485299    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:15.485302    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:15.487768    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:15.487830    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:15.985642    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:15.985663    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:15.985675    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:15.985680    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:15.989433    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:16.484901    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:16.484927    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:16.484939    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:16.484944    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:16.488779    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:16.986034    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:16.986047    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:16.986054    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:16.986062    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:16.988709    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:17.486864    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:17.486887    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:17.486924    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:17.486931    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:17.490473    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:17.490551    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:17.985889    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:17.985909    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:17.985921    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:17.985925    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:17.989836    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:18.485398    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:18.485414    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:18.485421    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:18.485425    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:18.487889    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:18.985349    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:18.985378    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:18.985436    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:18.985442    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:18.988422    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:19.485081    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:19.485102    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:19.485113    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:19.485121    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:19.488852    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:19.985049    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:19.985062    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:19.985081    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:19.985085    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:19.987210    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:19.987270    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:20.484914    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:20.484939    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:20.484949    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:20.484954    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:20.488695    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:20.985203    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:20.985229    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:20.985239    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:20.985245    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:20.989283    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:25:21.484963    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:21.484979    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:21.484985    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:21.484989    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:21.487275    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:21.985755    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:21.985782    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:21.985793    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:21.985798    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:21.989914    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:25:21.989997    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:22.485717    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:22.485745    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:22.485824    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:22.485835    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:22.489667    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:22.985286    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:22.985301    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:22.985307    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:22.985318    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:22.987903    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:23.485546    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:23.485567    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:23.485578    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:23.485583    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:23.489380    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:23.985686    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:23.985757    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:23.985777    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:23.985792    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:23.989466    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:24.484557    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:24.484568    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:24.484575    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:24.484578    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:24.487089    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:24.487151    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:24.985579    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:24.985600    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:24.985609    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:24.985614    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:24.989536    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:25.485541    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:25.485564    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:25.485576    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:25.485583    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:25.489272    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:25.984513    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:25.984529    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:25.984536    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:25.984540    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:25.987095    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:26.486003    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:26.486022    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:26.486034    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:26.486043    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:26.489357    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:26.489445    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:26.985326    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:26.985345    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:26.985357    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:26.985363    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:26.988993    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:27.484603    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:27.484616    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:27.484621    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:27.484625    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:27.486943    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:27.984825    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:27.984844    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:27.984855    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:27.984861    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:27.988691    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:28.486230    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:28.486245    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:28.486253    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:28.486259    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:28.491735    4656 round_trippers.go:574] Response Status: 404 Not Found in 5 milliseconds
	I0816 10:25:28.491792    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:28.985268    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:28.985287    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:28.985315    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:28.985319    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:28.987718    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:29.485335    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:29.485355    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:29.485367    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:29.485372    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:29.488781    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:29.984712    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:29.984727    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:29.984736    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:29.984740    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:29.987128    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:30.484437    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:30.484448    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:30.484454    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:30.484457    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:30.487047    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:30.984627    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:30.984648    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:30.984659    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:30.984665    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:30.988084    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:30.988236    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:31.486364    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:31.486416    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:31.486431    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:31.486464    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:31.489760    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:31.985027    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:31.985041    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:31.985048    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:31.985052    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:31.987323    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:32.486368    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:32.486394    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:32.486407    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:32.486413    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:32.490571    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:25:32.984941    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:32.984966    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:32.984978    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:32.984984    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:32.988672    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:32.988757    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:33.484801    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:33.484813    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:33.484818    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:33.484823    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:33.487037    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:33.985797    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:33.985821    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:33.985834    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:33.985843    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:33.989368    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:34.484289    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:34.484304    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:34.484313    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:34.484318    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:34.486642    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:34.985159    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:34.985174    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:34.985181    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:34.985184    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:34.987765    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:35.484974    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:35.484995    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:35.485006    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:35.485012    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:35.488175    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:35.488288    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:35.984879    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:35.984901    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:35.984913    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:35.984918    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:35.988822    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:36.485651    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:36.485664    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:36.485671    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:36.485673    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:36.488116    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:36.985565    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:36.985584    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:36.985595    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:36.985601    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:36.989216    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:37.485779    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:37.485862    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:37.485877    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:37.485882    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:37.489350    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:37.489427    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:37.984128    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:37.984140    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:37.984146    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:37.984150    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:37.986646    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:38.485023    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:38.485039    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:38.485048    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:38.485052    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:38.487768    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:38.984183    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:38.984206    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:38.984261    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:38.984269    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:38.987325    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:39.485275    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:39.485321    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:39.485334    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:39.485338    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:39.487742    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:39.985699    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:39.985718    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:39.985729    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:39.985737    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:39.988773    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:39.988844    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:40.484531    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:40.484546    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:40.484554    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:40.484559    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:40.487018    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:40.985498    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:40.985513    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:40.985520    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:40.985524    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:40.987999    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:41.484329    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:41.484342    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:41.484347    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:41.484351    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:41.486849    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:41.984847    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:41.984871    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:41.984882    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:41.984889    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:41.988357    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:42.484908    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:42.484921    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:42.484927    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:42.484931    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:42.487626    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:42.487688    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:42.985273    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:42.985299    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:42.985311    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:42.985325    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:42.988684    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:43.485086    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:43.485111    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:43.485128    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:43.485134    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:43.488939    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:43.983910    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:43.983926    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:43.983933    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:43.983936    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:43.986292    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:44.484259    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:44.484279    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:44.484291    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:44.484328    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:44.486962    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:44.984437    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:44.984457    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:44.984467    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:44.984475    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:44.987835    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:44.987961    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:45.484938    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:45.484953    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:45.484961    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:45.484964    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:45.487461    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:45.985086    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:45.985109    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:45.985119    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:45.985124    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:45.988699    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:46.484276    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:46.484299    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:46.484311    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:46.484319    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:46.488509    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:25:46.983907    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:46.983920    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:46.983926    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:46.983929    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:46.986359    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:47.485117    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:47.485136    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:47.485145    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:47.485150    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:47.487992    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:47.488052    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:47.984816    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:47.984870    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:47.984882    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:47.984891    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:47.988129    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:48.483883    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:48.483900    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:48.483906    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:48.483911    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:48.486198    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:48.984169    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:48.984190    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:48.984203    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:48.984208    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:48.987942    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:49.484903    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:49.484919    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:49.484927    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:49.484933    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:49.487106    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:49.984353    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:49.984369    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:49.984375    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:49.984378    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:49.987041    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:49.987105    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:50.485525    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:50.485573    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:50.485599    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:50.485608    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:50.489590    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:50.983824    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:50.983847    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:50.983858    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:50.983864    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:50.987088    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:51.484527    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:51.484553    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:51.484560    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:51.484563    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:51.489758    4656 round_trippers.go:574] Response Status: 404 Not Found in 5 milliseconds
	I0816 10:25:51.984190    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:51.984202    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:51.984208    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:51.984212    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:51.986039    4656 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0816 10:25:52.484065    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:52.484112    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:52.484125    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:52.484132    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:52.487172    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:52.487316    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:52.984150    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:52.984166    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:52.984173    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:52.984175    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:52.986345    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:53.484269    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:53.484284    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:53.484293    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:53.484296    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:53.486726    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:53.985717    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:53.985742    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:53.985759    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:53.985765    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:53.989726    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:54.484319    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:54.484335    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:54.484342    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:54.484345    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:54.486811    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:54.984778    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:54.984800    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:54.984808    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:54.984812    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:54.987368    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:54.987445    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:55.484244    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:55.484267    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:55.484278    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:55.484286    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:55.488016    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:55.985068    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:55.985083    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:55.985090    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:55.985093    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:55.987495    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:56.484782    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:56.484807    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:56.484819    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:56.484826    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:56.488310    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:56.984397    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:56.984419    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:56.984431    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:56.984439    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:56.988216    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:56.988289    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:57.483589    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:57.483605    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:57.483611    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:57.483616    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:57.486165    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:57.985574    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:57.985599    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:57.985611    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:57.985616    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:57.989363    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:58.484270    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:58.484308    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:58.484320    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:58.484325    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:58.487918    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:58.983666    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:58.983682    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:58.983689    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:58.983697    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:58.985851    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:59.483521    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:59.483543    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:59.483554    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:59.483560    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:59.487399    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:59.487469    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:59.984232    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:59.984247    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:59.984255    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:59.984260    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:59.986963    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:00.483820    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:00.483833    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:00.483839    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:00.483842    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:00.486243    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:00.983904    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:00.983929    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:00.983941    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:00.983945    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:00.988101    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:26:01.484375    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:01.484399    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:01.484411    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:01.484448    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:01.488415    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:01.488502    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:01.983385    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:01.983401    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:01.983408    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:01.983411    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:01.985938    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:02.483425    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:02.483445    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:02.483457    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:02.483465    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:02.487166    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:02.984027    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:02.984094    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:02.984108    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:02.984117    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:02.987822    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:03.483320    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:03.483335    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:03.483341    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:03.483344    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:03.485639    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:03.985036    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:03.985059    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:03.985073    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:03.985077    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:03.988791    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:03.988858    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:04.483621    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:04.483639    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:04.483651    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:04.483658    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:04.487066    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:04.983859    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:04.983875    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:04.983882    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:04.983886    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:04.986493    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:05.483389    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:05.483408    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:05.483418    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:05.483422    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:05.486586    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:05.984366    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:05.984385    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:05.984397    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:05.984404    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:05.988161    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:06.483211    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:06.483226    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:06.483232    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:06.483235    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:06.485660    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:06.485720    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:06.983347    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:06.983366    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:06.983377    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:06.983386    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:06.986526    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:07.484090    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:07.484111    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:07.484123    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:07.484128    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:07.488198    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:26:07.983724    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:07.983740    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:07.983747    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:07.983750    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:07.986537    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:08.484146    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:08.484166    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:08.484178    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:08.484183    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:08.487983    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:08.488057    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:08.984192    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:08.984213    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:08.984224    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:08.984229    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:08.988294    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:26:09.484029    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:09.484043    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:09.484049    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:09.484052    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:09.486705    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:09.985246    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:09.985271    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:09.985283    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:09.985288    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:09.989175    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:10.483317    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:10.483343    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:10.483354    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:10.483360    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:10.486962    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:10.983808    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:10.983827    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:10.983852    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:10.983857    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:10.986240    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:10.986308    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:11.483336    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:11.483358    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:11.483369    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:11.483379    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:11.486931    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:11.984519    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:11.984661    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:11.984687    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:11.984697    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:11.988638    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:12.484861    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:12.484877    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:12.484886    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:12.484889    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:12.487390    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:12.983427    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:12.983451    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:12.983463    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:12.983469    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:12.986694    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:12.986771    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:13.484765    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:13.484792    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:13.484805    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:13.484811    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:13.488619    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:13.983338    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:13.983352    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:13.983393    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:13.983399    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:13.985734    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:14.483998    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:14.484020    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:14.484032    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:14.484040    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:14.487538    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:14.984976    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:14.985003    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:14.985019    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:14.985025    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:14.988674    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:14.988745    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:15.483186    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:15.483201    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:15.483208    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:15.483212    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:15.485667    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:15.983775    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:15.983787    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:15.983794    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:15.983798    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:15.986102    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:16.483426    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:16.483449    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:16.483465    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:16.483473    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:16.487194    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:16.983030    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:16.983043    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:16.983049    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:16.983053    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:16.986507    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:17.484904    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:17.484932    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:17.484944    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:17.484951    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:17.488809    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:17.488909    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:17.983661    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:17.983682    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:17.983691    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:17.983700    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:17.987560    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:18.483005    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:18.483019    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:18.483043    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:18.483047    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:18.485247    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:18.982837    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:18.982858    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:18.982870    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:18.982877    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:18.986275    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:19.484274    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:19.484305    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:19.484343    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:19.484351    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:19.488293    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:19.983892    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:19.983907    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:19.983913    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:19.983917    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:19.986273    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:19.986330    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:20.483798    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:20.483825    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:20.483837    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:20.483843    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:20.487687    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:20.983298    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:20.983317    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:20.983329    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:20.983341    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:20.986753    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:21.483677    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:21.483697    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:21.483720    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:21.483722    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:21.486177    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:21.983903    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:21.983922    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:21.983934    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:21.983940    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:21.986911    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:21.986973    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:22.484112    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:22.484134    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:22.484147    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:22.484152    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:22.488262    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:26:22.983975    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:22.984028    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:22.984035    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:22.984039    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:22.986443    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:23.483009    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:23.483033    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:23.483050    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:23.483066    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:23.486675    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:23.983451    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:23.983483    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:23.983500    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:23.983511    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:23.987001    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:23.987063    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:24.483488    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:24.483536    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:24.483547    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:24.483551    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:24.485853    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:24.982731    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:24.982743    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:24.982750    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:24.982753    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:24.984789    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:25.483610    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:25.483630    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:25.483639    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:25.483645    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:25.487060    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:25.982597    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:25.982610    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:25.982622    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:25.982626    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:25.994285    4656 round_trippers.go:574] Response Status: 404 Not Found in 11 milliseconds
	I0816 10:26:25.994342    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:26.483108    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:26.483129    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:26.483141    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:26.483147    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:26.486703    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:26.984543    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:26.984561    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:26.984570    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:26.984574    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:26.987295    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:27.484057    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:27.484070    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:27.484076    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:27.484079    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:27.486438    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:27.982568    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:27.982579    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:27.982586    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:27.982589    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:27.984714    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:28.482928    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:28.482954    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:28.482966    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:28.482971    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:28.486982    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:28.487049    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:28.983984    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:28.984000    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:28.984007    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:28.984010    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:28.986187    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:29.482503    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:29.482527    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:29.482539    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:29.482545    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:29.485679    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:29.982668    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:29.982688    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:29.982700    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:29.982707    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:29.986106    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:30.483014    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:30.483035    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:30.483044    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:30.483048    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:30.485517    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:30.984509    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:30.984533    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:30.984544    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:30.984596    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:30.988289    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:30.988408    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:31.483916    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:31.483943    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:31.483981    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:31.483990    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:31.487890    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:31.982923    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:31.982943    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:31.982952    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:31.982956    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:31.985708    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:32.483569    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:32.483593    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:32.483605    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:32.483616    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:32.487327    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:32.982635    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:32.982661    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:32.982673    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:32.982679    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:32.986374    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:33.482846    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:33.482858    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:33.482872    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:33.482882    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:33.485277    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:33.485339    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:33.982793    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:33.982819    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:33.982831    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:33.982836    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:33.986153    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:34.482560    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:34.482578    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:34.482604    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:34.482610    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:34.486015    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:34.982428    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:34.982450    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:34.982463    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:34.982469    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:34.985873    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:35.483727    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:35.483740    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:35.483747    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:35.483751    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:35.485833    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:35.485894    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:35.982916    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:35.982943    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:35.982955    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:35.982965    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:35.986742    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:36.483103    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:36.483123    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:36.483132    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:36.483135    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:36.485868    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:36.982704    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:36.982762    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:36.982776    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:36.982790    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:36.986222    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:37.483468    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:37.483488    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:37.483500    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:37.483506    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:37.487244    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:37.487314    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:37.983372    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:37.983388    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:37.983394    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:37.983397    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:37.985922    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:38.483160    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:38.483179    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:38.483191    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:38.483199    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:38.486492    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:38.982468    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:38.982483    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:38.982489    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:38.982493    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:38.984866    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:39.482442    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:39.482495    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:39.482503    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:39.482507    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:39.484936    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:39.982412    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:39.982432    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:39.982441    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:39.982450    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:39.986230    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:39.986305    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:40.483055    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:40.483077    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:40.483087    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:40.483096    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:40.486444    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:40.983022    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:40.983056    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:40.983064    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:40.983068    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:40.985224    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:41.482184    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:41.482204    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:41.482215    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:41.482220    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:41.485468    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:41.983203    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:41.983227    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:41.983306    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:41.983320    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:41.987091    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:41.987171    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:42.483067    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:42.483083    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:42.483092    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:42.483096    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:42.485854    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:42.982325    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:42.982346    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:42.982358    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:42.982367    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:42.985247    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:43.482212    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:43.482232    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:43.482245    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:43.482253    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:43.485500    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:43.982210    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:43.982226    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:43.982232    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:43.982235    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:43.984789    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:44.483719    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:44.483739    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:44.483750    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:44.483758    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:44.487463    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:44.487539    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:44.984070    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:44.984094    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:44.984106    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:44.984112    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:44.987930    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:45.483159    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:45.483174    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:45.483183    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:45.483188    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:45.485689    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:45.982348    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:45.982376    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:45.982441    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:45.982451    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:45.986431    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:46.483035    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:46.483061    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:46.483073    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:46.483079    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:46.487152    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:26:46.982639    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:46.982696    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:46.982710    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:46.982717    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:46.986259    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:46.986315    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:47.482155    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:47.482188    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:47.482237    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:47.482249    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:47.485627    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:47.983982    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:47.984007    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:47.984020    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:47.984026    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:47.988122    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:26:48.482121    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:48.482168    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:48.482175    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:48.482179    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:48.484595    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:48.983532    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:48.983556    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:48.983569    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:48.983574    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:48.987409    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:48.987484    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:49.483718    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:49.483736    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:49.483748    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:49.483754    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:49.487115    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:49.982660    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:49.982682    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:49.982692    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:49.982696    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:49.985469    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:50.481995    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:50.482014    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:50.482032    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:50.482058    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:50.485582    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:50.981809    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:50.981828    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:50.981835    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:50.981839    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:50.984238    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:51.482206    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:51.482226    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:51.482236    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:51.482241    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:51.485102    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:51.485201    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:51.983488    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:51.983503    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:51.983512    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:51.983516    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:51.986249    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:52.482268    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:52.482293    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:52.482304    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:52.482311    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:52.485931    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:52.983543    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:52.983556    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:52.983562    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:52.983564    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:52.987568    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:53.482529    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:53.482553    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:53.482590    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:53.482612    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:53.486396    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:53.486481    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:53.983382    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:53.983409    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:53.983421    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:53.983426    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:53.987647    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:26:54.482288    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:54.482367    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:54.482378    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:54.482383    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:54.484925    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:54.983458    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:54.983478    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:54.983490    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:54.983497    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:54.987016    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:55.482017    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:55.482037    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:55.482048    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:55.482054    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:55.485201    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:55.983339    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:55.983353    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:55.983360    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:55.983377    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:55.985849    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:55.985910    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:56.483753    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:56.483779    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:56.483792    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:56.483798    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:56.487683    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:56.983682    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:56.983735    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:56.983749    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:56.983758    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:56.987724    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:57.481708    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:57.481724    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:57.481730    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:57.481733    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:57.483972    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:57.983723    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:57.983751    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:57.983772    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:57.983782    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:57.987662    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:57.987781    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:58.481946    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:58.481978    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:58.481989    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:58.481998    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:58.485616    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:58.982478    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:58.982494    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:58.982501    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:58.982503    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:58.984797    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:59.482635    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:59.482661    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:59.482672    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:59.482678    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:59.486199    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:59.983080    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:59.983108    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:59.983179    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:59.983189    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:59.986765    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:00.481883    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:00.481904    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:00.481916    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:00.481923    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:00.485164    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:00.485241    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:00.983581    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:00.983606    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:00.983618    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:00.983626    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:00.987095    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:01.481499    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:01.481518    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:01.481530    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:01.481536    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:01.484541    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:01.981949    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:01.981971    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:01.981980    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:01.981985    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:01.984730    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:02.483014    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:02.483039    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:02.483050    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:02.483057    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:02.486856    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:02.486952    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:02.982039    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:02.982061    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:02.982075    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:02.982083    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:02.986009    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:03.482044    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:03.482058    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:03.482064    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:03.482068    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:03.484293    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:03.982493    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:03.982521    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:03.982589    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:03.982599    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:03.986547    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:04.481423    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:04.481443    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:04.481481    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:04.481492    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:04.484534    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:04.981631    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:04.981650    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:04.981659    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:04.981665    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:04.984478    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:04.984535    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:05.481850    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:05.481876    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:05.481888    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:05.481895    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:05.485885    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:05.983485    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:05.983508    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:05.983520    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:05.983529    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:05.987747    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:27:06.481638    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:06.481654    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:06.481660    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:06.481666    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:06.483910    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:06.982417    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:06.982443    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:06.982456    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:06.982461    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:06.986711    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:27:06.986836    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:07.482901    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:07.482925    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:07.482937    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:07.482944    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:07.486790    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:07.981354    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:07.981370    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:07.981376    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:07.981380    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:07.984233    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:08.482884    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:08.482907    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:08.482918    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:08.482923    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:08.486675    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:08.983285    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:08.983308    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:08.983320    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:08.983362    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:08.987075    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:08.987178    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:09.481582    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:09.481596    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:09.481602    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:09.481615    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:09.484345    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:09.982946    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:09.982968    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:09.982980    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:09.982987    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:09.987241    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:27:10.482214    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:10.482233    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:10.482245    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:10.482250    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:10.485342    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:10.981598    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:10.981613    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:10.981647    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:10.981651    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:10.983798    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:11.481915    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:11.481938    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:11.481949    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:11.481956    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:11.485887    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:11.485960    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:11.982040    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:11.982065    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:11.982077    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:11.982085    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:11.985843    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:12.481119    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:12.481134    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:12.481140    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:12.481144    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:12.483753    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:12.983314    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:12.983335    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:12.983348    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:12.983354    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:12.987658    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:27:13.483200    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:13.483225    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:13.483237    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:13.483242    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:13.487000    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:13.487075    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:13.981082    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:13.981098    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:13.981104    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:13.981107    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:13.983666    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:14.481510    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:14.481533    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:14.481546    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:14.481553    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:14.485493    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:14.982587    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:14.982611    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:14.982623    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:14.982632    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:14.986953    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:27:15.481989    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:15.482002    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:15.482008    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:15.482011    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:15.484306    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:15.983142    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:15.983197    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:15.983212    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:15.983220    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:15.987145    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:15.987217    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:16.482640    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:16.482663    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:16.482676    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:16.482682    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:16.486588    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:16.982739    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:16.982758    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:16.982767    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:16.982771    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:16.985870    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:17.482222    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:17.482247    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:17.482259    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:17.482264    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:17.486553    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:27:17.982295    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:17.982319    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:17.982345    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:17.982355    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:17.986295    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:18.481466    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:18.481480    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:18.481501    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:18.481505    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:18.484182    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:18.484250    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:18.981829    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:18.981869    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:18.981879    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:18.981887    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:18.984310    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:19.481304    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:19.481354    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:19.481368    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:19.481374    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:19.485047    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:19.981003    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:19.981016    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:19.981022    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:19.981026    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:19.983258    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:20.482082    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:20.482099    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:20.482107    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:20.482110    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:20.484774    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:20.484831    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:20.982149    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:20.982161    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:20.982167    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:20.982171    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:20.984491    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:21.482759    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:21.482774    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:21.482784    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:21.482805    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:21.488307    4656 round_trippers.go:574] Response Status: 404 Not Found in 5 milliseconds
	I0816 10:27:21.980923    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:21.980944    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:21.980956    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:21.980962    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:21.985236    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:27:22.480954    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:22.480982    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:22.481000    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:22.481007    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:22.484623    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:22.982155    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:22.982170    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:22.982177    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:22.982183    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:22.985131    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:22.985233    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:23.481447    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:23.481473    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:23.481485    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:23.481493    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:23.485171    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:23.980807    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:23.980841    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:23.980854    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:23.980886    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:23.984726    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:24.481009    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:24.481023    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:24.481030    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:24.481033    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:24.483629    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:24.981780    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:24.981800    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:24.981812    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:24.981817    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:24.985032    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:25.482336    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:25.482370    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:25.482430    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:25.482437    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:25.486196    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:25.486271    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:25.981022    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:25.981035    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:25.981041    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:25.981048    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:25.983833    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:26.481578    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:26.481603    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:26.481614    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:26.481620    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:26.485938    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:27:26.981068    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:26.981108    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:26.981117    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:26.981122    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:26.983762    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:27.481705    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:27.481739    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:27.481747    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:27.481751    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:27.484193    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:27.981754    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:27.981779    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:27.981791    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:27.981804    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:27.985583    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:27.985651    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:28.481144    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:28.481173    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:28.481209    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:28.481216    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:28.484725    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:28.981756    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:28.981769    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:28.981776    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:28.981779    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:28.984303    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:29.481471    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:29.481547    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:29.481562    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:29.481571    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:29.484980    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:29.981350    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:29.981376    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:29.981388    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:29.981394    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:29.985134    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:30.481784    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:30.481800    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:30.481807    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:30.481810    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:30.483987    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:30.484040    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:30.981042    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:30.981064    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:30.981075    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:30.981082    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:30.985035    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:31.480553    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:31.480568    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:31.480576    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:31.480580    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:31.483746    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:31.981346    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:31.981362    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:31.981368    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:31.981372    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:31.983579    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:32.481011    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:32.481036    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:32.481048    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:32.481054    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:32.484005    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:32.484066    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:32.980838    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:32.980858    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:32.980869    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:32.980876    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:32.984769    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:33.481797    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:33.481813    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:33.481819    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:33.481822    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:33.484075    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:33.980538    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:33.980569    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:33.980581    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:33.980586    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:33.984292    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:34.480611    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:34.480633    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:34.480644    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:34.480650    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:34.484424    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:34.484495    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:34.980662    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:34.980675    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:34.980685    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:34.980688    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:34.983333    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:35.481072    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:35.481093    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:35.481104    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:35.481109    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:35.484858    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:35.980573    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:35.980600    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:35.980613    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:35.980619    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:35.984318    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:36.481723    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:36.481742    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:36.481750    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:36.481755    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:36.484525    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:36.484582    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:36.981468    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:36.981491    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:36.981534    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:36.981541    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:36.985480    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:37.481087    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:37.481115    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:37.481127    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:37.481133    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:37.484349    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:37.981606    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:37.981618    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:37.981624    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:37.981628    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:37.984174    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:38.480919    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:38.480942    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:38.480954    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:38.480960    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:38.484462    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:38.484530    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:38.980883    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:38.980958    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:38.980971    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:38.980976    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:38.985426    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:27:39.480691    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:39.480705    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:39.480711    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:39.480714    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:39.483370    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:39.980523    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:39.980543    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:39.980554    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:39.980559    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:39.983705    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:40.480857    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:40.480870    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:40.480876    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:40.480880    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:40.483015    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:40.980527    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:40.980547    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:40.980559    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:40.980566    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:40.984425    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:40.984557    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:41.480215    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:41.480250    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:41.480259    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:41.480264    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:41.482681    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:41.980221    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:41.980233    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:41.980238    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:41.980241    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:41.983101    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:42.481763    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:42.481782    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:42.481794    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:42.481801    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:42.484939    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:42.981092    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:42.981114    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:42.981125    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:42.981131    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:42.985191    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:27:42.985282    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:43.481456    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:43.481481    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:43.481493    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:43.481498    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:43.485020    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:43.981686    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:43.981734    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:43.981742    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:43.981745    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:43.984138    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:44.480895    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:44.480921    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:44.480934    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:44.480940    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:44.484864    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:44.980350    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:44.980376    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:44.980388    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:44.980394    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:44.984559    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:27:45.480493    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:45.480509    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:45.480518    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:45.480523    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:45.483088    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:45.483193    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:45.981740    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:45.981766    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:45.981778    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:45.981787    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:45.985812    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:27:46.480744    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:46.480771    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:46.480782    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:46.480788    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:46.484433    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:46.980028    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:46.980044    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:46.980052    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:46.980058    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:46.982468    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:47.480811    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:47.480834    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:47.480846    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:47.480854    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:47.484154    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:47.484225    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:47.981495    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:47.981558    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:47.981573    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:47.981579    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:47.984852    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:48.481331    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:48.481350    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:48.481357    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:48.481360    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:48.483672    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:48.981308    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:48.981334    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:48.981345    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:48.981351    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:48.987316    4656 round_trippers.go:574] Response Status: 404 Not Found in 5 milliseconds
	I0816 10:27:49.480610    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:49.480631    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:49.480642    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:49.480650    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:49.484493    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:49.484576    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:49.980270    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:49.980291    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:49.980303    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:49.980311    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:49.983514    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:50.480630    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:50.480652    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:50.480663    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:50.480672    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:50.484716    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:27:50.980998    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:50.981031    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:50.981079    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:50.981089    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:50.984717    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:51.481764    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:51.481781    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:51.481788    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:51.481792    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:51.483882    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:51.981147    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:51.981167    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:51.981178    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:51.981185    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:51.984837    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:51.984916    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:52.480088    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:52.480109    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:52.480121    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:52.480126    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:52.483987    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:52.980987    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:52.981013    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:52.981029    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:52.981059    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:52.984581    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:53.480043    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:53.480063    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:53.480084    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:53.480092    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:53.483664    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:53.980634    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:53.980693    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:53.980706    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:53.980711    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:53.984482    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:54.480029    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:54.480042    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:54.480051    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:54.480056    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:54.482803    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:54.482872    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:54.980002    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:54.980026    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:54.980038    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:54.980043    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:54.983690    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:55.480147    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:55.480213    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:55.480241    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:55.480251    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:55.484002    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:55.980804    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:55.980819    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:55.980825    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:55.980828    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:55.982902    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:56.480975    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:56.480997    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:56.481006    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:56.481011    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:56.484989    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:56.485061    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:56.980849    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:56.980870    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:56.980880    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:56.980888    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:56.984648    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:57.479708    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:57.479723    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:57.479732    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:57.479736    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:57.482298    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:57.979711    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:57.979729    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:57.979741    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:57.979746    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:57.983031    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:58.481734    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:58.481790    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:58.481805    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:58.481814    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:58.486010    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:27:58.486113    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:58.980860    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:58.980917    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:58.980929    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:58.980937    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:58.984281    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:59.480008    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:59.480075    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:59.480090    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:59.480100    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:59.483377    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:59.981599    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:59.981621    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:59.981633    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:59.981639    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:59.985606    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:00.480770    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:00.480786    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:00.480795    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:00.480798    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:00.483310    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:00.980781    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:00.980807    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:00.980817    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:00.980824    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:00.984773    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:00.984872    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:01.480191    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:01.480210    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:01.480218    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:01.480222    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:01.482706    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:01.979918    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:01.979940    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:01.979950    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:01.979955    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:01.982361    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:02.481286    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:02.481302    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:02.481308    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:02.481311    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:02.483655    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:02.980572    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:02.980632    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:02.980646    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:02.980655    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:02.984337    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:03.479541    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:03.479553    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:03.479560    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:03.479562    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:03.482043    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:03.482109    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:03.980816    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:03.980840    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:03.980877    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:03.980906    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:03.984861    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:04.481240    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:04.481266    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:04.481276    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:04.481282    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:04.485558    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:04.981353    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:04.981413    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:04.981429    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:04.981438    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:04.984812    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:05.480489    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:05.480511    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:05.480523    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:05.480528    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:05.484058    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:05.484144    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:05.979456    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:05.979471    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:05.979480    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:05.979485    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:05.981941    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:06.480803    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:06.480823    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:06.480834    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:06.480841    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:06.483869    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:06.980347    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:06.980368    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:06.980379    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:06.980384    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:06.983544    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:07.479393    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:07.479421    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:07.479481    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:07.479491    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:07.483249    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:07.979964    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:07.979979    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:07.979985    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:07.979988    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:07.983187    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:07.983251    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:08.479456    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:08.479474    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:08.479486    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:08.479493    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:08.483132    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:08.980053    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:08.980073    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:08.980083    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:08.980090    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:08.983933    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:09.481215    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:09.481229    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:09.481237    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:09.481242    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:09.483856    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:09.980082    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:09.980109    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:09.980121    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:09.980129    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:09.983657    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:09.983727    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:10.481137    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:10.481162    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:10.481171    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:10.481178    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:10.485023    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:10.979382    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:10.979406    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:10.979418    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:10.979425    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:10.982616    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:11.480878    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:11.480900    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:11.480924    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:11.480931    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:11.484400    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:11.980148    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:11.980201    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:11.980213    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:11.980220    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:11.983261    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:12.479546    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:12.479558    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:12.479564    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:12.479568    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:12.482006    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:12.482066    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:12.980407    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:12.980433    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:12.980446    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:12.980455    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:12.984259    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:13.481285    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:13.481304    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:13.481316    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:13.481321    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:13.484864    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:13.980948    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:13.980967    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:13.981024    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:13.981032    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:13.983792    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:14.480529    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:14.480592    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:14.480607    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:14.480615    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:14.485369    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:14.485425    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:14.980508    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:14.980528    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:14.980540    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:14.980546    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:14.984308    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:15.479351    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:15.479366    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:15.479375    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:15.479378    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:15.482333    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:15.979273    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:15.979296    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:15.979308    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:15.979317    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:15.983036    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:16.480267    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:16.480288    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:16.480300    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:16.480306    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:16.484104    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:16.979260    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:16.979282    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:16.979294    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:16.979302    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:16.983145    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:16.983218    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:17.479986    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:17.480012    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:17.480023    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:17.480031    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:17.483621    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:17.980230    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:17.980255    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:17.980267    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:17.980273    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:17.983388    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:18.479428    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:18.479444    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:18.479452    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:18.479457    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:18.482401    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:18.980054    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:18.980078    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:18.980090    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:18.980111    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:18.984291    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:18.984384    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:19.479204    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:19.479223    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:19.479235    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:19.479241    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:19.482609    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:19.980334    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:19.980358    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:19.980370    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:19.980376    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:19.984055    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:20.479678    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:20.479704    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:20.479716    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:20.479722    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:20.483940    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:20.980207    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:20.980232    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:20.980243    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:20.980248    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:20.984073    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:21.479009    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:21.479028    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:21.479039    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:21.479045    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:21.482870    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:21.482946    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:21.979028    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:21.979048    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:21.979060    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:21.979067    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:21.982782    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:22.480202    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:22.480229    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:22.480242    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:22.480248    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:22.484332    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:22.979809    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:22.979829    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:22.979861    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:22.979867    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:22.982210    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:23.480520    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:23.480541    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:23.480556    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:23.480564    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:23.484344    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:23.484415    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:23.978872    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:23.978890    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:23.978939    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:23.978947    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:23.981588    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:24.478940    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:24.479024    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:24.479038    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:24.479046    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:24.482719    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:24.980016    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:24.980040    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:24.980053    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:24.980061    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:24.984315    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:25.478940    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:25.478960    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:25.478971    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:25.478978    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:25.483052    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:25.979269    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:25.979296    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:25.979308    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:25.979314    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:25.983114    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:25.983257    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:26.479781    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:26.479806    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:26.479817    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:26.479828    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:26.483419    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:26.979605    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:26.979626    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:26.979637    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:26.979644    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:26.982753    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:27.479413    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:27.479438    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:27.479450    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:27.479458    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:27.483110    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:27.980825    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:27.980852    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:27.980863    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:27.980870    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:27.984767    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:27.984839    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:28.479839    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:28.479867    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:28.479880    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:28.479888    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:28.483764    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:28.978775    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:28.978797    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:28.978808    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:28.978815    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:28.982911    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:29.480812    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:29.480838    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:29.480848    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:29.480854    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:29.484272    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:29.980179    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:29.980196    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:29.980204    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:29.980208    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:29.983010    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:30.479018    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:30.479037    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:30.479056    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:30.479060    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:30.480976    4656 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0816 10:28:30.481040    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:30.979780    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:30.979800    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:30.979810    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:30.979815    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:30.983686    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:31.479047    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:31.479069    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:31.479081    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:31.479088    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:31.482916    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:31.979327    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:31.979383    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:31.979396    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:31.979406    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:31.982781    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:32.479680    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:32.479701    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:32.479712    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:32.479718    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:32.483452    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:32.483551    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:32.979627    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:32.979653    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:32.979665    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:32.979672    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:32.983502    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:33.479195    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:33.479213    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:33.479223    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:33.479231    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:33.482627    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:33.978591    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:33.978614    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:33.978669    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:33.978677    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:33.982499    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:34.478777    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:34.478796    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:34.478805    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:34.478810    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:34.481463    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:34.979814    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:34.979835    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:34.979847    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:34.979856    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:34.984020    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:34.984095    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:35.478731    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:35.478759    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:35.478769    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:35.478775    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:35.482596    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:35.979086    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:35.979114    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:35.979127    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:35.979133    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:35.982826    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:36.478524    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:36.478548    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:36.478560    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:36.478568    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:36.482514    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:36.978759    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:36.978778    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:36.978789    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:36.978795    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:36.982532    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:37.478813    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:37.478836    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:37.478848    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:37.478854    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:37.482815    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:37.483027    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:37.980493    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:37.980519    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:37.980530    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:37.980535    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:37.984193    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:38.479572    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:38.479594    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:38.479607    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:38.479615    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:38.483949    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:38.980347    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:38.980372    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:38.980383    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:38.980388    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:38.984077    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:39.480084    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:39.480110    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:39.480121    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:39.480127    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:39.483858    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:39.483927    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:39.978886    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:39.978908    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:39.978920    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:39.978927    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:39.982482    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:40.478804    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:40.478830    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:40.478841    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:40.478847    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:40.482793    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:40.979356    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:40.979380    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:40.979392    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:40.979401    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:40.983583    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:41.479873    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:41.479894    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:41.479913    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:41.479918    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:41.483490    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:41.978368    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:41.978382    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:41.978389    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:41.978393    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:41.984198    4656 round_trippers.go:574] Response Status: 404 Not Found in 5 milliseconds
	I0816 10:28:41.984261    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:42.478642    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:42.478662    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:42.478675    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:42.478681    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:42.482721    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:42.979333    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:42.979358    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:42.979370    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:42.979376    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:42.983591    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:43.478780    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:43.478803    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:43.478816    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:43.478824    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:43.482771    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:43.978807    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:43.978858    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:43.978871    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:43.978878    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:43.982183    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:44.479103    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:44.479131    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:44.479208    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:44.479217    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:44.483010    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:44.483102    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:44.980168    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:44.980193    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:44.980205    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:44.980212    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:44.984284    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:45.478814    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:45.478840    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:45.478851    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:45.478856    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:45.482566    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:45.978463    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:45.978490    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:45.978503    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:45.978509    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:45.982104    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:46.478332    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:46.478358    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:46.478370    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:46.478376    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:46.482196    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:46.980202    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:46.980226    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:46.980235    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:46.980242    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:46.984038    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:46.984113    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:47.480191    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:47.480236    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:47.480249    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:47.480256    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:47.483962    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:47.978487    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:47.978512    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:47.978524    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:47.978529    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:47.982450    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:48.478150    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:48.478167    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:48.478183    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:48.478192    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:48.481632    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:48.978324    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:48.978347    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:48.978359    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:48.978366    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:48.982094    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:49.479467    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:49.479488    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:49.479500    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:49.479508    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:49.483304    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:49.483387    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:49.979540    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:49.979559    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:49.979567    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:49.979571    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:49.982173    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:50.478844    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:50.478865    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:50.478876    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:50.478882    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:50.482687    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:50.979032    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:50.979057    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:50.979069    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:50.979075    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:50.982937    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:51.477969    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:51.477985    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:51.477992    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:51.477996    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:51.480844    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:51.978499    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:51.978525    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:51.978594    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:51.978604    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:51.982296    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:51.982369    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:52.478660    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:52.478681    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:52.478693    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:52.478700    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:52.482493    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:52.979157    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:52.979218    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:52.979232    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:52.979243    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:52.982949    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:53.477935    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:53.477952    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:53.477964    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:53.477971    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:53.481445    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:53.979399    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:53.979426    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:53.979437    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:53.979442    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:53.983298    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:53.983373    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:54.477959    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:54.477983    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:54.477992    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:54.478000    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:54.480818    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:54.977914    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:54.977928    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:54.977937    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:54.977943    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:54.980985    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:55.477939    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:55.477959    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:55.477971    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:55.477980    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:55.481823    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:55.978706    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:55.978725    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:55.978734    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:55.978740    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:55.981215    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:56.478017    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:56.478041    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:56.478055    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:56.478066    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:56.481827    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:56.481901    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:56.979955    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:56.979976    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:56.979987    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:56.979994    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:56.984295    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:57.478039    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:57.478057    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:57.478067    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:57.478073    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:57.481105    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:57.978248    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:57.978270    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:57.978283    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:57.978291    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:57.982239    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:58.477943    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:58.477971    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:58.477987    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:58.478001    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:58.481727    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:58.978661    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:58.978678    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:58.978687    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:58.978693    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:58.981579    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:58.981644    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:59.479830    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:59.479861    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:59.479927    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:59.479949    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:59.483371    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:59.977787    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:59.977804    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:59.977810    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:59.977813    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:59.979974    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:29:00.478024    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:00.478039    4656 round_trippers.go:469] Request Headers:
	I0816 10:29:00.478047    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:00.478051    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:00.480707    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:29:00.979674    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:00.979700    4656 round_trippers.go:469] Request Headers:
	I0816 10:29:00.979712    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:00.979718    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:00.983620    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:29:00.983742    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:29:01.478022    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:01.478042    4656 round_trippers.go:469] Request Headers:
	I0816 10:29:01.478053    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:01.478060    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:01.481326    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:29:01.978405    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:01.978425    4656 round_trippers.go:469] Request Headers:
	I0816 10:29:01.978434    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:01.978438    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:01.981188    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:29:02.479658    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:02.479772    4656 round_trippers.go:469] Request Headers:
	I0816 10:29:02.479790    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:02.479798    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:02.483872    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:29:02.979772    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:02.979794    4656 round_trippers.go:469] Request Headers:
	I0816 10:29:02.979807    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:02.979815    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:02.983496    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:29:03.477789    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:03.477808    4656 round_trippers.go:469] Request Headers:
	I0816 10:29:03.477817    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:03.477821    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:03.480617    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:29:03.480674    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:29:03.977650    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:03.977672    4656 round_trippers.go:469] Request Headers:
	I0816 10:29:03.977683    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:03.977689    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:03.981168    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:29:04.479691    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:04.479717    4656 round_trippers.go:469] Request Headers:
	I0816 10:29:04.479729    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:04.479737    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:04.483384    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:29:04.978063    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:04.978077    4656 round_trippers.go:469] Request Headers:
	I0816 10:29:04.978086    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:04.978091    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:04.980657    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:29:05.479407    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:05.479427    4656 round_trippers.go:469] Request Headers:
	I0816 10:29:05.479438    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:05.479443    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:05.482914    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:29:05.483084    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:29:05.979238    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:05.979260    4656 round_trippers.go:469] Request Headers:
	I0816 10:29:05.979272    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:05.979280    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:05.982997    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:29:06.478226    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:06.478251    4656 round_trippers.go:469] Request Headers:
	I0816 10:29:06.478264    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:06.478270    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:06.482103    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:29:06.482169    4656 node_ready.go:38] duration metric: took 4m0.00480463s for node "ha-286000-m03" to be "Ready" ...
	I0816 10:29:06.503388    4656 out.go:201] 
	W0816 10:29:06.524396    4656 out.go:270] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0816 10:29:06.524419    4656 out.go:270] * 
	W0816 10:29:06.525619    4656 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:29:06.587617    4656 out.go:201] 
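
The four minutes of 404s above are minikube's node-readiness wait (node_ready.go): it issues GET /api/v1/nodes/ha-286000-m03 roughly every 500ms, treats "not found" as retryable, and gives up when the readiness window lapses (the 4m0s duration metric), surfacing the GUEST_START context-deadline failure above. A minimal client-go sketch of the same loop; the 500ms interval and 4m window are read off the trace, while the kubeconfig wiring is illustrative, not minikube's actual code:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls until the named node exists and reports
    // Ready=True, or the context deadline expires.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
        for {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            } else if !apierrors.IsNotFound(err) {
                return err // anything other than a 404 is fatal
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("waiting for node %q: %w", name, ctx.Err())
            case <-time.After(500 * time.Millisecond): // matches the trace cadence
            }
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // 4m matches the "took 4m0.00480463s ... to be Ready" metric above.
        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
        defer cancel()
        fmt.Println(waitNodeReady(ctx, cs, "ha-286000-m03"))
    }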
	
	
	==> Docker <==
	Aug 16 17:24:28 ha-286000 cri-dockerd[1436]: time="2024-08-16T17:24:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/137dbec658acee61ce1910017edb0f5b3a85b75c5e3049e8bd90f1dbefcdb1c7/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 16 17:24:29 ha-286000 dockerd[1187]: time="2024-08-16T17:24:28.998809824Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:24:29 ha-286000 dockerd[1187]: time="2024-08-16T17:24:28.998948255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:24:29 ha-286000 dockerd[1187]: time="2024-08-16T17:24:28.998962428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:24:29 ha-286000 dockerd[1187]: time="2024-08-16T17:24:28.999102266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:24:29 ha-286000 dockerd[1187]: time="2024-08-16T17:24:29.047276534Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:24:29 ha-286000 dockerd[1187]: time="2024-08-16T17:24:29.047427124Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:24:29 ha-286000 dockerd[1187]: time="2024-08-16T17:24:29.047450862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:24:29 ha-286000 dockerd[1187]: time="2024-08-16T17:24:29.047581008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:24:29 ha-286000 dockerd[1187]: time="2024-08-16T17:24:29.126544781Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:24:29 ha-286000 dockerd[1187]: time="2024-08-16T17:24:29.126662219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:24:29 ha-286000 dockerd[1187]: time="2024-08-16T17:24:29.126672757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:24:29 ha-286000 dockerd[1187]: time="2024-08-16T17:24:29.126811937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:24:40 ha-286000 dockerd[1187]: time="2024-08-16T17:24:40.084727507Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:24:40 ha-286000 dockerd[1187]: time="2024-08-16T17:24:40.084839498Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:24:40 ha-286000 dockerd[1187]: time="2024-08-16T17:24:40.084854114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:24:40 ha-286000 dockerd[1187]: time="2024-08-16T17:24:40.085367785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:24:59 ha-286000 dockerd[1181]: time="2024-08-16T17:24:59.347142049Z" level=info msg="ignoring event" container=0c18c93270e7a68d0392d4acb324154ee21a1f857073e107621429751d23f788 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 16 17:24:59 ha-286000 dockerd[1187]: time="2024-08-16T17:24:59.347787162Z" level=info msg="shim disconnected" id=0c18c93270e7a68d0392d4acb324154ee21a1f857073e107621429751d23f788 namespace=moby
	Aug 16 17:24:59 ha-286000 dockerd[1187]: time="2024-08-16T17:24:59.347864246Z" level=warning msg="cleaning up after shim disconnected" id=0c18c93270e7a68d0392d4acb324154ee21a1f857073e107621429751d23f788 namespace=moby
	Aug 16 17:24:59 ha-286000 dockerd[1187]: time="2024-08-16T17:24:59.347873243Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 16 17:25:12 ha-286000 dockerd[1187]: time="2024-08-16T17:25:12.082815222Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:25:12 ha-286000 dockerd[1187]: time="2024-08-16T17:25:12.082919934Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:25:12 ha-286000 dockerd[1187]: time="2024-08-16T17:25:12.082946545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:25:12 ha-286000 dockerd[1187]: time="2024-08-16T17:25:12.083100138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
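
The dockerd entries above are routine: each "loading plugin" burst is the containerd runc shim starting for a container, and the 17:24:59 trio records the exit of the old storage-provisioner container (0c18c93270e7a, attempt 3 in the status table below). The same lifecycle events can be watched over the Docker Engine events API; a hedged sketch with the official Go SDK follows (type names shift slightly between SDK versions, e.g. newer releases move EventsOptions into the events package, so treat the exact identifiers as assumptions):

    package main

    import (
        "context"
        "fmt"

        "github.com/docker/docker/api/types"
        "github.com/docker/docker/api/types/filters"
        "github.com/docker/docker/client"
    )

    func main() {
        cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
        if err != nil {
            panic(err)
        }
        // Only container "die" events, i.e. the moments that produce the
        // "ignoring event ... TaskDelete" lines dockerd logs above.
        f := filters.NewArgs()
        f.Add("type", "container")
        f.Add("event", "die")
        msgs, errs := cli.Events(context.Background(), types.EventsOptions{Filters: f})
        for {
            select {
            case m := <-msgs:
                fmt.Printf("%d %s %s\n", m.Time, m.Action, m.Actor.ID)
            case err := <-errs:
                panic(err)
            }
        }
    }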
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8803f7012c881       6e38f40d628db                                                                                         3 minutes ago       Running             storage-provisioner       4                   fca40ed5fc112       storage-provisioner
	88937b4d9b3fc       045733566833c                                                                                         4 minutes ago       Running             kube-controller-manager   2                   b20f8615dee49       kube-controller-manager-ha-286000
	fdeb6586df346       8c811b4aec35f                                                                                         4 minutes ago       Running             busybox                   1                   137dbec658ace       busybox-7dff88458-dvmvk
	f9023c4cc7d09       12968670680f4                                                                                         4 minutes ago       Running             kindnet-cni               1                   b874afa97d609       kindnet-whqxb
	3cf3b8e6c2561       cbb01a7bd410d                                                                                         4 minutes ago       Running             coredns                   1                   0b81f15659889       coredns-6f6b679f8f-2kqjf
	0c18c93270e7a       6e38f40d628db                                                                                         4 minutes ago       Exited              storage-provisioner       3                   fca40ed5fc112       storage-provisioner
	5cf894bf46807       cbb01a7bd410d                                                                                         4 minutes ago       Running             coredns                   1                   26513e2b92d66       coredns-6f6b679f8f-rfbz7
	60feb425249e9       ad83b2ca7b09e                                                                                         4 minutes ago       Running             kube-proxy                1                   8008f00487db3       kube-proxy-w4nt2
	2d90cfc5f1d77       38af8ddebf499                                                                                         5 minutes ago       Running             kube-vip                  0                   bda0d9ff673b9       kube-vip-ha-286000
	77cac41fb9bde       2e96e5913fc06                                                                                         5 minutes ago       Running             etcd                      1                   5ee84d4289ece       etcd-ha-286000
	bcd696090d544       1766f54c897f0                                                                                         5 minutes ago       Running             kube-scheduler            1                   97f04e9e38892       kube-scheduler-ha-286000
	64b3c5f995d8d       604f5db92eaa8                                                                                         5 minutes ago       Running             kube-apiserver            4                   8d4b6b4a23609       kube-apiserver-ha-286000
	257f5b412fe2a       045733566833c                                                                                         5 minutes ago       Exited              kube-controller-manager   1                   b20f8615dee49       kube-controller-manager-ha-286000
	63b366c951f2a       604f5db92eaa8                                                                                         7 minutes ago       Exited              kube-apiserver            3                   818ee6dafe6c9       kube-apiserver-ha-286000
	5bbe7a8cc492e       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   24 minutes ago      Exited              busybox                   0                   1873ade92edb9       busybox-7dff88458-dvmvk
	bcd7170b050a5       cbb01a7bd410d                                                                                         26 minutes ago      Exited              coredns                   0                   452942e267927       coredns-6f6b679f8f-rfbz7
	60d3d03e297ce       cbb01a7bd410d                                                                                         26 minutes ago      Exited              coredns                   0                   fbd84fb813c90       coredns-6f6b679f8f-2kqjf
	2677394dcd66f       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              26 minutes ago      Exited              kindnet-cni               0                   afaf657e85c32       kindnet-whqxb
	81f6c96d46494       ad83b2ca7b09e                                                                                         26 minutes ago      Exited              kube-proxy                0                   dfb822fc27b5e       kube-proxy-w4nt2
	f7b2e9efdd94f       1766f54c897f0                                                                                         26 minutes ago      Exited              kube-scheduler            0                   8e2b186980aff       kube-scheduler-ha-286000
	d8dadff6cec78       2e96e5913fc06                                                                                         26 minutes ago      Exited              etcd                      0                   cdb14ff7d8896       etcd-ha-286000
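
The status table tells the restart story: kube-apiserver is on attempt 4 and kube-controller-manager on attempt 2, while the original attempt-0 containers from 26 minutes ago sit in Exited. The same view can be pulled programmatically by parsing crictl's JSON output; in the sketch below the field names follow the CRI ListContainers response as crictl emits it, and the schema is an assumption to verify against your crictl version:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // Loosely-typed view of `crictl ps -a -o json` (assumed schema,
    // mirroring the CRI ListContainers protobuf).
    type psOutput struct {
        Containers []struct {
            ID       string `json:"id"`
            State    string `json:"state"` // CONTAINER_RUNNING, CONTAINER_EXITED, ...
            Metadata struct {
                Name    string `json:"name"`
                Attempt int    `json:"attempt"`
            } `json:"metadata"`
        } `json:"containers"`
    }

    func main() {
        out, err := exec.Command("crictl", "ps", "-a", "-o", "json").Output()
        if err != nil {
            panic(err)
        }
        var ps psOutput
        if err := json.Unmarshal(out, &ps); err != nil {
            panic(err)
        }
        for _, c := range ps.Containers {
            // attempt > 0 means the kubelet recreated this container,
            // e.g. kube-apiserver attempt 4 in the table above.
            fmt.Printf("%-13.13s %-25s attempt=%d %s\n",
                c.ID, c.Metadata.Name, c.Metadata.Attempt, c.State)
        }
    }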
	
	
	==> coredns [3cf3b8e6c256] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:58071 - 29432 "HINFO IN 269282700017442046.6298598734389881778. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.104629212s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1710767206]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 17:24:29.307) (total time: 30004ms):
	Trace[1710767206]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (17:24:59.311)
	Trace[1710767206]: [30.004743477s] [30.004743477s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1321835322]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 17:24:29.307) (total time: 30005ms):
	Trace[1321835322]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (17:24:59.312)
	Trace[1321835322]: [30.005483265s] [30.005483265s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[816453993]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 17:24:29.312) (total time: 30003ms):
	Trace[816453993]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (17:24:59.315)
	Trace[816453993]: [30.003551219s] [30.003551219s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
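
Both restarted coredns pods show the same signature: every List against the in-cluster service VIP https://10.96.0.1:443 blocks for the full 30s client-side timeout, which points to the VIP being black-holed (kube-proxy had not yet reprogrammed its rules after the restart) rather than actively refused. A short client-go sketch that reproduces the same call with the same timeout, useful for probing from inside a pod (illustrative, not coredns's own code):

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        // In-cluster config resolves the same https://10.96.0.1:443 VIP
        // the coredns reflectors are dialing above.
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        cfg.Timeout = 30 * time.Second // matches the ~30s per-List timeout in the traces
        cs := kubernetes.NewForConfigOrDie(cfg)
        svcs, err := cs.CoreV1().Services(metav1.NamespaceAll).List(context.Background(),
            metav1.ListOptions{Limit: 500})
        if err != nil {
            fmt.Println("list services:", err) // "i/o timeout" while the VIP is black-holed
            return
        }
        fmt.Println("services:", len(svcs.Items))
    }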
	
	
	==> coredns [5cf894bf4680] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51498 - 60294 "HINFO IN 6373854949728581283.8966112489703867485. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.072467092s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[817614149]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 17:24:29.307) (total time: 30005ms):
	Trace[817614149]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (17:24:59.311)
	Trace[817614149]: [30.005208149s] [30.005208149s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1980986726]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 17:24:29.307) (total time: 30005ms):
	Trace[1980986726]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (17:24:59.311)
	Trace[1980986726]: [30.005923834s] [30.005923834s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1722306438]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 17:24:29.312) (total time: 30003ms):
	Trace[1722306438]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (17:24:59.315)
	Trace[1722306438]: [30.003847815s] [30.003847815s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [60d3d03e297c] <==
	[INFO] plugin/kubernetes: Trace[1595166943]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 17:19:50.818) (total time: 11830ms):
	Trace[1595166943]: ---"Objects listed" error:Unauthorized 11830ms (17:20:02.649)
	Trace[1595166943]: [11.830466351s] [11.830466351s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2678": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2678": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[852140040]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 17:20:06.131) (total time: 10521ms):
	Trace[852140040]: ---"Objects listed" error:Unauthorized 10521ms (17:20:16.652)
	Trace[852140040]: [10.521589006s] [10.521589006s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [bcd7170b050a] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[1786059905]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 17:20:05.425) (total time: 11223ms):
	Trace[1786059905]: ---"Objects listed" error:Unauthorized 11223ms (17:20:16.649)
	Trace[1786059905]: [11.223878813s] [11.223878813s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: Unexpected error when reading response body: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: unexpected error when reading response body. Please retry. Original error: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: Trace[1902597424]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 17:20:18.397) (total time: 12364ms):
	Trace[1902597424]: ---"Objects listed" error:unexpected error when reading response body. Please retry. Original error: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug="" 12364ms (17:20:30.761)
	Trace[1902597424]: [12.364669513s] [12.364669513s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: unexpected error when reading response body. Please retry. Original error: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: Unexpected error when reading response body: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: unexpected error when reading response body. Please retry. Original error: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: unexpected error when reading response body. Please retry. Original error: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
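
The pre-restart coredns containers fail differently: Unauthorized, plus one GOAWAY as the old kube-apiserver shut down. A 401 after a control-plane restart typically means the pod's service-account credentials are not accepted until fresh state is picked up, so the usual client-side response is to treat it as transient and retry under a bounded poll. A hedged sketch follows; wait.PollUntilContextTimeout exists in recent apimachinery releases, while older ones spell the same idea as wait.PollImmediate:

    package main

    import (
        "context"
        "fmt"
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // Keep retrying through the 401 window; client-go re-reads the
        // projected token file, so a later attempt can pick up valid credentials.
        err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 2*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                _, err := cs.CoreV1().Namespaces().List(ctx, metav1.ListOptions{Limit: 500})
                if apierrors.IsUnauthorized(err) {
                    return false, nil // transient while credentials catch up
                }
                return err == nil, err
            })
        fmt.Println("api reachable:", err == nil)
    }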
	
	
	==> describe nodes <==
	Name:               ha-286000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-286000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=ha-286000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T10_02_26_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:02:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-286000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:29:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 17:24:20 +0000   Fri, 16 Aug 2024 17:22:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 17:24:20 +0000   Fri, 16 Aug 2024 17:22:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 17:24:20 +0000   Fri, 16 Aug 2024 17:22:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 17:24:20 +0000   Fri, 16 Aug 2024 17:22:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-286000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 2010adee17654cf9b80256054061ea5a
	  System UUID:                ad96408c-0000-0000-89eb-d74e5b68d297
	  Boot ID:                    bef9467e-8834-4316-92a2-f595c590a856
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-dvmvk              0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 coredns-6f6b679f8f-2kqjf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 coredns-6f6b679f8f-rfbz7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 etcd-ha-286000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         26m
	  kube-system                 kindnet-whqxb                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26m
	  kube-system                 kube-apiserver-ha-286000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-controller-manager-ha-286000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-proxy-w4nt2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-ha-286000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-vip-ha-286000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m39s                  kube-proxy       
	  Normal  Starting                 26m                    kube-proxy       
	  Normal  NodeHasSufficientMemory  26m (x8 over 26m)      kubelet          Node ha-286000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  26m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     26m (x7 over 26m)      kubelet          Node ha-286000 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    26m (x8 over 26m)      kubelet          Node ha-286000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 26m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  26m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 26m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           26m                    node-controller  Node ha-286000 event: Registered Node ha-286000 in Controller
	  Normal  RegisteredNode           25m                    node-controller  Node ha-286000 event: Registered Node ha-286000 in Controller
	  Normal  RegisteredNode           8m21s                  node-controller  Node ha-286000 event: Registered Node ha-286000 in Controller
	  Normal  NodeNotReady             6m58s (x2 over 8m23s)  node-controller  Node ha-286000 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  6m37s (x3 over 26m)    kubelet          Node ha-286000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m37s (x3 over 26m)    kubelet          Node ha-286000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m37s (x3 over 26m)    kubelet          Node ha-286000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m37s (x3 over 26m)    kubelet          Node ha-286000 status is now: NodeReady
	  Normal  Starting                 5m19s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m18s (x8 over 5m18s)  kubelet          Node ha-286000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m18s (x8 over 5m18s)  kubelet          Node ha-286000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m18s (x7 over 5m18s)  kubelet          Node ha-286000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m46s                  node-controller  Node ha-286000 event: Registered Node ha-286000 in Controller
	  Normal  RegisteredNode           4m26s                  node-controller  Node ha-286000 event: Registered Node ha-286000 in Controller
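
For reference, the per-node detail above is plain "kubectl describe node" output. A minimal sketch for re-checking just each node's Ready condition against the same cluster (assuming the kubeconfig produced by this run is still active) is:

  # List every node with the status of its Ready condition (True/False/Unknown).
  kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'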
	
	
	Name:               ha-286000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-286000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=ha-286000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T10_03_22_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:03:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-286000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:28:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 17:24:23 +0000   Fri, 16 Aug 2024 17:22:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 17:24:23 +0000   Fri, 16 Aug 2024 17:22:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 17:24:23 +0000   Fri, 16 Aug 2024 17:22:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 17:24:23 +0000   Fri, 16 Aug 2024 17:22:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-286000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 ee275d4bd6234ce08a6c7d60b8d19b43
	  System UUID:                f7634935-0000-0000-a751-ac72999da031
	  Boot ID:                    035257b9-18e7-4adc-8e61-b35126468d96
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-k9m92                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 etcd-ha-286000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         25m
	  kube-system                 kindnet-t9kjf                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      25m
	  kube-system                 kube-apiserver-ha-286000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-controller-manager-ha-286000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-proxy-pt669                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-scheduler-ha-286000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-vip-ha-286000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m33s                  kube-proxy       
	  Normal  Starting                 8m6s                   kube-proxy       
	  Normal  Starting                 25m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  25m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  25m (x8 over 25m)      kubelet          Node ha-286000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25m (x8 over 25m)      kubelet          Node ha-286000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25m (x7 over 25m)      kubelet          Node ha-286000-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25m                    node-controller  Node ha-286000-m02 event: Registered Node ha-286000-m02 in Controller
	  Normal  RegisteredNode           25m                    node-controller  Node ha-286000-m02 event: Registered Node ha-286000-m02 in Controller
	  Normal  NodeAllocatableEnforced  8m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m34s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m34s (x8 over 8m34s)  kubelet          Node ha-286000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m34s (x8 over 8m34s)  kubelet          Node ha-286000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m34s (x7 over 8m34s)  kubelet          Node ha-286000-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8m21s                  node-controller  Node ha-286000-m02 event: Registered Node ha-286000-m02 in Controller
	  Normal  NodeNotReady             7m3s                   node-controller  Node ha-286000-m02 status is now: NodeNotReady
	  Normal  Starting                 4m59s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m59s (x8 over 4m59s)  kubelet          Node ha-286000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m59s (x8 over 4m59s)  kubelet          Node ha-286000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m59s (x7 over 4m59s)  kubelet          Node ha-286000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m46s                  node-controller  Node ha-286000-m02 event: Registered Node ha-286000-m02 in Controller
	  Normal  RegisteredNode           4m26s                  node-controller  Node ha-286000-m02 event: Registered Node ha-286000-m02 in Controller
	
	
	Name:               ha-286000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-286000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=ha-286000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T10_17_22_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:17:21 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-286000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:22:50 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 16 Aug 2024 17:22:27 +0000   Fri, 16 Aug 2024 17:25:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 16 Aug 2024 17:22:27 +0000   Fri, 16 Aug 2024 17:25:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 16 Aug 2024 17:22:27 +0000   Fri, 16 Aug 2024 17:25:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 16 Aug 2024 17:22:27 +0000   Fri, 16 Aug 2024 17:25:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-286000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 324c7ca05f77443abc4e861a3d5a5224
	  System UUID:                9a6645c6-0000-0000-8cbd-49b6a6a0383b
	  Boot ID:                    839ab079-775d-4939-ac8e-9fb255ba29df
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-99xmp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kindnet-b9r6s              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-proxy-5qhgk           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                   From             Message
	  ----    ------                   ----                  ----             -------
	  Normal  Starting                 11m                   kube-proxy       
	  Normal  NodeAllocatableEnforced  11m                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                   node-controller  Node ha-286000-m04 event: Registered Node ha-286000-m04 in Controller
	  Normal  RegisteredNode           11m                   node-controller  Node ha-286000-m04 event: Registered Node ha-286000-m04 in Controller
	  Normal  RegisteredNode           8m21s                 node-controller  Node ha-286000-m04 event: Registered Node ha-286000-m04 in Controller
	  Normal  NodeNotReady             7m3s (x2 over 8m23s)  node-controller  Node ha-286000-m04 status is now: NodeNotReady
	  Normal  NodeHasSufficientPID     6m41s (x4 over 11m)   kubelet          Node ha-286000-m04 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  6m41s (x4 over 11m)   kubelet          Node ha-286000-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m41s (x4 over 11m)   kubelet          Node ha-286000-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                6m41s (x3 over 11m)   kubelet          Node ha-286000-m04 status is now: NodeReady
	  Normal  RegisteredNode           4m46s                 node-controller  Node ha-286000-m04 event: Registered Node ha-286000-m04 in Controller
	  Normal  RegisteredNode           4m26s                 node-controller  Node ha-286000-m04 event: Registered Node ha-286000-m04 in Controller
	  Normal  NodeNotReady             4m6s                  node-controller  Node ha-286000-m04 status is now: NodeNotReady
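
The Unknown conditions and the unreachable NoSchedule/NoExecute taints above mean the node controller stopped receiving status from the m04 kubelet. A sketch for listing just that node's taints (same kubeconfig assumption as above):

  # Print key and effect for each taint on ha-286000-m04.
  kubectl get node ha-286000-m04 -o jsonpath='{range .spec.taints[*]}{.key}{" -> "}{.effect}{"\n"}{end}'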
	
	
	==> dmesg <==
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.035803] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.008121] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.699152] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.007082] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.882621] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.230843] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.349832] systemd-fstab-generator[472]: Ignoring "noauto" option for root device
	[  +0.095939] systemd-fstab-generator[484]: Ignoring "noauto" option for root device
	[  +2.008291] systemd-fstab-generator[1111]: Ignoring "noauto" option for root device
	[  +0.258306] systemd-fstab-generator[1147]: Ignoring "noauto" option for root device
	[  +0.099664] systemd-fstab-generator[1159]: Ignoring "noauto" option for root device
	[  +0.061191] kauditd_printk_skb: 123 callbacks suppressed
	[  +0.060084] systemd-fstab-generator[1173]: Ignoring "noauto" option for root device
	[  +2.467356] systemd-fstab-generator[1389]: Ignoring "noauto" option for root device
	[  +0.100054] systemd-fstab-generator[1401]: Ignoring "noauto" option for root device
	[  +0.107009] systemd-fstab-generator[1413]: Ignoring "noauto" option for root device
	[  +0.132145] systemd-fstab-generator[1428]: Ignoring "noauto" option for root device
	[  +0.458193] systemd-fstab-generator[1593]: Ignoring "noauto" option for root device
	[  +6.918226] kauditd_printk_skb: 190 callbacks suppressed
	[Aug16 17:24] kauditd_printk_skb: 40 callbacks suppressed
	[ +21.525016] kauditd_printk_skb: 82 callbacks suppressed
	
	
	==> etcd [77cac41fb9bd] <==
	{"level":"info","ts":"2024-08-16T17:24:16.876969Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:24:16.894983Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"9633c02797b6d34","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-16T17:24:16.895123Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:24:16.966205Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"9633c02797b6d34","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-16T17:24:16.966225Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"warn","ts":"2024-08-16T17:24:17.366286Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740435865607170,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-16T17:24:17.388375Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"9633c02797b6d34","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-16T17:24:17.389606Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"9633c02797b6d34","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-16T17:24:17.664302Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 3, index: 4621, vote: b8c6c7563d17d844] cast MsgPreVote for 9633c02797b6d34 [logterm: 3, index: 4621] at term 3"}
	{"level":"info","ts":"2024-08-16T17:24:17.667262Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [term: 3] received a MsgVote message with higher term from 9633c02797b6d34 [term: 4]"}
	{"level":"info","ts":"2024-08-16T17:24:17.667420Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became follower at term 4"}
	{"level":"info","ts":"2024-08-16T17:24:17.667474Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 3, index: 4621, vote: 0] cast MsgVote for 9633c02797b6d34 [logterm: 3, index: 4621] at term 4"}
	{"level":"info","ts":"2024-08-16T17:24:17.668493Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b8c6c7563d17d844 elected leader 9633c02797b6d34 at term 4"}
	{"level":"warn","ts":"2024-08-16T17:24:17.668980Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"3.305918651s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-08-16T17:24:17.669025Z","caller":"traceutil/trace.go:171","msg":"trace[958839912] range","detail":"{range_begin:; range_end:; }","duration":"3.306272649s","start":"2024-08-16T17:24:14.362747Z","end":"2024-08-16T17:24:17.669020Z","steps":["trace[958839912] 'agreement among raft nodes before linearized reading'  (duration: 3.305917726s)"],"step_count":1}
	{"level":"error","ts":"2024-08-16T17:24:17.669050Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: leader changed\n[+]data_corruption ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-08-16T17:24:17.672550Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b8c6c7563d17d844","local-member-attributes":"{Name:ha-286000 ClientURLs:[https://192.169.0.5:2379]}","request-path":"/0/members/b8c6c7563d17d844/attributes","cluster-id":"b73189effde9bc63","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-16T17:24:17.672690Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T17:24:17.673076Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-16T17:24:17.673114Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-16T17:24:17.672747Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T17:24:17.675839Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T17:24:17.676355Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T17:24:17.676582Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.5:2379"}
	{"level":"info","ts":"2024-08-16T17:24:17.677166Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [d8dadff6cec7] <==
	{"level":"info","ts":"2024-08-16T17:23:23.603134Z","caller":"traceutil/trace.go:171","msg":"trace[1695899457] range","detail":"{range_begin:/registry/persistentvolumeclaims/; range_end:/registry/persistentvolumeclaims0; }","duration":"7.280387387s","start":"2024-08-16T17:23:16.322744Z","end":"2024-08-16T17:23:23.603132Z","steps":["trace[1695899457] 'agreement among raft nodes before linearized reading'  (duration: 7.280377262s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T17:23:23.603145Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T17:23:16.322710Z","time spent":"7.280431347s","remote":"127.0.0.1:56178","response type":"/etcdserverpb.KV/Range","request count":0,"request size":72,"response count":0,"response size":0,"request content":"key:\"/registry/persistentvolumeclaims/\" range_end:\"/registry/persistentvolumeclaims0\" count_only:true "}
	2024/08/16 17:23:23 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-16T17:23:23.603197Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"3.204231928s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingadmissionpolicies/\" range_end:\"/registry/validatingadmissionpolicies0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-08-16T17:23:23.603208Z","caller":"traceutil/trace.go:171","msg":"trace[821531247] range","detail":"{range_begin:/registry/validatingadmissionpolicies/; range_end:/registry/validatingadmissionpolicies0; }","duration":"3.204245539s","start":"2024-08-16T17:23:20.398959Z","end":"2024-08-16T17:23:23.603205Z","steps":["trace[821531247] 'agreement among raft nodes before linearized reading'  (duration: 3.204231749s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T17:23:23.603218Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T17:23:20.398944Z","time spent":"3.204271101s","remote":"127.0.0.1:56532","response type":"/etcdserverpb.KV/Range","request count":0,"request size":82,"response count":0,"response size":0,"request content":"key:\"/registry/validatingadmissionpolicies/\" range_end:\"/registry/validatingadmissionpolicies0\" count_only:true "}
	2024/08/16 17:23:23 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-16T17:23:23.604807Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740435533448225,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-08-16T17:23:23.605017Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-16T17:23:23.605028Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-16T17:23:23.605034Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 3"}
	{"level":"info","ts":"2024-08-16T17:23:23.605042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 3, index: 4621] sent MsgPreVote request to 9633c02797b6d34 at term 3"}
	{"level":"warn","ts":"2024-08-16T17:23:23.646548Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-16T17:23:23.646617Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-16T17:23:23.646652Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-16T17:23:23.647836Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:23:23.647877Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:23:23.647896Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:23:23.648043Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:23:23.648105Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:23:23.648130Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:23:23.648158Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:23:23.650448Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-08-16T17:23:23.650508Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-08-16T17:23:23.650516Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-286000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> kernel <==
	 17:29:09 up 5 min,  0 users,  load average: 0.03, 0.06, 0.02
	Linux ha-286000 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2677394dcd66] <==
	I0816 17:22:35.224951       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	I0816 17:22:45.231619       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:22:45.231806       1 main.go:299] handling current node
	I0816 17:22:45.231910       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:22:45.231994       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:22:45.232158       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 17:22:45.232263       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	I0816 17:22:55.225733       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:22:55.225894       1 main.go:299] handling current node
	I0816 17:22:55.225954       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:22:55.226004       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:22:55.226143       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 17:22:55.226223       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	I0816 17:23:05.224175       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:23:05.224416       1 main.go:299] handling current node
	I0816 17:23:05.224540       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:23:05.224830       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:23:05.225112       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 17:23:05.225305       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	I0816 17:23:15.226037       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:23:15.226204       1 main.go:299] handling current node
	I0816 17:23:15.226257       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:23:15.226357       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:23:15.226471       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 17:23:15.226617       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [f9023c4cc7d0] <==
	I0816 17:28:20.463105       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	I0816 17:28:30.454540       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:28:30.454636       1 main.go:299] handling current node
	I0816 17:28:30.454654       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:28:30.454663       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:28:30.455097       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 17:28:30.455153       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	I0816 17:28:40.454971       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:28:40.455047       1 main.go:299] handling current node
	I0816 17:28:40.455069       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:28:40.455125       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:28:40.455273       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 17:28:40.455375       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	I0816 17:28:50.455763       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:28:50.455828       1 main.go:299] handling current node
	I0816 17:28:50.455839       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:28:50.455844       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:28:50.456111       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 17:28:50.456162       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	I0816 17:29:00.454581       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:29:00.454775       1 main.go:299] handling current node
	I0816 17:29:00.454872       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:29:00.454929       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:29:00.455117       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 17:29:00.455196       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [63b366c951f2] <==
	W0816 17:23:23.632159       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.632202       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.632235       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.632264       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.632290       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.632317       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.632341       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.632369       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.632396       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.632421       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.632450       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.632476       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.632504       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.632531       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.632590       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0816 17:23:23.633069       1 controller.go:195] "Failed to update lease" err="rpc error: code = Unknown desc = malformed header: missing HTTP content-type"
	W0816 17:23:23.633101       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.633121       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.633137       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.633159       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.633175       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.633190       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.633205       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.633221       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.633236       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [64b3c5f995d8] <==
	I0816 17:24:18.567658       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0816 17:24:18.568203       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0816 17:24:18.568352       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0816 17:24:18.635954       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0816 17:24:18.636020       1 policy_source.go:224] refreshing policies
	I0816 17:24:18.661089       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0816 17:24:18.661333       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0816 17:24:18.665098       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0816 17:24:18.665805       1 shared_informer.go:320] Caches are synced for configmaps
	I0816 17:24:18.666159       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0816 17:24:18.666396       1 aggregator.go:171] initial CRD sync complete...
	I0816 17:24:18.666573       1 autoregister_controller.go:144] Starting autoregister controller
	I0816 17:24:18.669371       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0816 17:24:18.669507       1 cache.go:39] Caches are synced for autoregister controller
	I0816 17:24:18.669649       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0816 17:24:18.673264       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0816 17:24:18.673925       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0816 17:24:18.676414       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0816 17:24:18.681474       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0816 17:24:18.693871       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0816 17:24:18.734462       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0816 17:24:19.567976       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0816 17:24:19.905347       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5 192.169.0.6]
	I0816 17:24:19.907243       1 controller.go:615] quota admission added evaluator for: endpoints
	I0816 17:24:19.913024       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
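
The restarted apiserver above finishes its cache syncs and resets the "kubernetes" endpoints to the two reachable control-plane IPs. Its aggregate readiness can be probed directly through the API (a sketch, same kubeconfig assumption as above):

  # Dump the per-check readiness report of the active apiserver.
  kubectl get --raw='/readyz?verbose'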
	
	
	==> kube-controller-manager [257f5b412fe2] <==
	I0816 17:23:57.992802       1 serving.go:386] Generated self-signed cert in-memory
	I0816 17:23:58.299343       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0816 17:23:58.299552       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 17:23:58.302121       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0816 17:23:58.302479       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0816 17:23:58.302580       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0816 17:23:58.303517       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0816 17:24:18.587870       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-controller-manager [88937b4d9b3f] <==
	I0816 17:24:42.505619       1 shared_informer.go:320] Caches are synced for daemon sets
	I0816 17:24:42.533222       1 shared_informer.go:320] Caches are synced for resource quota
	I0816 17:24:42.560393       1 shared_informer.go:320] Caches are synced for taint
	I0816 17:24:42.560879       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0816 17:24:42.561245       1 shared_informer.go:320] Caches are synced for resource quota
	I0816 17:24:42.562839       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-286000"
	I0816 17:24:42.562900       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-286000-m02"
	I0816 17:24:42.562932       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-286000-m04"
	I0816 17:24:42.563154       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0816 17:24:42.951528       1 shared_informer.go:320] Caches are synced for garbage collector
	I0816 17:24:42.974213       1 shared_informer.go:320] Caches are synced for garbage collector
	I0816 17:24:42.974443       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0816 17:25:02.082814       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:25:02.095196       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:25:02.127968       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="39.027587ms"
	I0816 17:25:02.128030       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="24.447µs"
	I0816 17:25:02.643392       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:25:07.139420       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:25:08.371423       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-mqfxs EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-mqfxs\": the object has been modified; please apply your changes to the latest version and try again"
	I0816 17:25:08.371686       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"fd9fe61f-ffc6-4f61-848a-91dfce599e44", APIVersion:"v1", ResourceVersion:"301", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-mqfxs EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-mqfxs": the object has been modified; please apply your changes to the latest version and try again
	I0816 17:25:08.374007       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="29.442371ms"
	I0816 17:25:08.393173       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-mqfxs EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-mqfxs\": the object has been modified; please apply your changes to the latest version and try again"
	I0816 17:25:08.393688       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"fd9fe61f-ffc6-4f61-848a-91dfce599e44", APIVersion:"v1", ResourceVersion:"301", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-mqfxs EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-mqfxs": the object has been modified; please apply your changes to the latest version and try again
	I0816 17:25:08.408690       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="34.610733ms"
	I0816 17:25:08.408944       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="211.164µs"
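
The FailedToUpdateEndpointSlices warnings above are optimistic-concurrency conflicts ("the object has been modified") that the endpointslice controller retries on its own. A sketch for inspecting the slice it was contending on (same kubeconfig assumption):

  # Show the EndpointSlices backing the kube-dns Service.
  kubectl -n kube-system get endpointslices -l kubernetes.io/service-name=kube-dns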
	
	
	==> kube-proxy [60feb425249e] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 17:24:29.419881       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 17:24:29.442807       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0816 17:24:29.442895       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 17:24:29.500213       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 17:24:29.500259       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 17:24:29.500279       1 server_linux.go:169] "Using iptables Proxier"
	I0816 17:24:29.504235       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 17:24:29.504982       1 server.go:483] "Version info" version="v1.31.0"
	I0816 17:24:29.505010       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 17:24:29.508282       1 config.go:197] "Starting service config controller"
	I0816 17:24:29.508363       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 17:24:29.508991       1 config.go:326] "Starting node config controller"
	I0816 17:24:29.509044       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 17:24:29.510479       1 config.go:104] "Starting endpoint slice config controller"
	I0816 17:24:29.510508       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 17:24:29.609193       1 shared_informer.go:320] Caches are synced for node config
	I0816 17:24:29.609332       1 shared_informer.go:320] Caches are synced for service config
	I0816 17:24:29.610541       1 shared_informer.go:320] Caches are synced for endpoint slice config
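
The nftables errors at the top of this log appear to come from kube-proxy's startup cleanup of any leftover nftables rules; the kernel rejects the nft commands, which is harmless here since the proxier then runs, as logged, in iptables mode. A sketch for confirming the active mode across kube-proxy pods (the k8s-app=kube-proxy label is the stock DaemonSet label, assumed here):

  # Grep recent kube-proxy logs for the proxier mode actually chosen.
  kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=200 | grep -i proxier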
	
	
	==> kube-proxy [81f6c96d4649] <==
	E0816 17:18:57.696982       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2592\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:00.770881       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2600": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:00.770973       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2600\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:00.771455       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2678": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:00.771540       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2678\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:03.838026       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2592": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:03.838287       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2592\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:09.980567       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2600": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:09.980625       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2600\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:13.053000       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2592": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:13.053145       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2592\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:16.125305       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2678": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:16.125738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2678\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:28.413017       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2600": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:28.413242       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2600\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:37.633251       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2592": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:37.633353       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2592\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:37.633417       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2678": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:37.633437       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2678\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:56.059814       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2600": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:56.059845       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2600\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:20:17.564736       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2592": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:20:17.564831       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2592\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:20:17.565065       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2678": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:20:17.565112       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2678\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [bcd696090d54] <==
	I0816 17:23:57.780845       1 serving.go:386] Generated self-signed cert in-memory
	W0816 17:24:08.860542       1 authentication.go:370] Error looking up in-cluster authentication configuration: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0816 17:24:08.860585       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0816 17:24:08.860591       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0816 17:24:18.591414       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0816 17:24:18.591456       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 17:24:18.606860       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0816 17:24:18.608591       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0816 17:24:18.608692       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 17:24:18.609554       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0816 17:24:18.708922       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [f7b2e9efdd94] <==
	W0816 17:20:20.013337       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 17:20:20.013508       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0816 17:20:22.503962       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 17:20:22.504039       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:20:23.117539       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 17:20:23.117759       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0816 17:20:24.619908       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 17:20:24.620160       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:20:32.932878       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 17:20:32.932925       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:20:34.100467       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 17:20:34.100511       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:20:36.209664       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 17:20:36.209784       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:20:36.615553       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 17:20:36.615615       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:20:37.131529       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 17:20:37.131621       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:20:37.319247       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 17:20:37.319312       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:20:39.232294       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 17:20:39.232326       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0816 17:21:33.466903       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?resourceVersion=2660": dial tcp 192.169.0.5:8443: connect: connection refused
	E0816 17:21:33.467202       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?resourceVersion=2660\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	E0816 17:23:23.612582       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 16 17:24:50 ha-286000 kubelet[1600]: I0816 17:24:50.120535    1600 scope.go:117] "RemoveContainer" containerID="7f657edc1d3b8f5045737982e100c854d8e59d84a21acff3cc336aa51153d837"
	Aug 16 17:24:59 ha-286000 kubelet[1600]: I0816 17:24:59.550487    1600 scope.go:117] "RemoveContainer" containerID="0529825d87ca563ea67f41e2943958d645bbb6c21324531c86fbddc26820ac06"
	Aug 16 17:24:59 ha-286000 kubelet[1600]: I0816 17:24:59.550781    1600 scope.go:117] "RemoveContainer" containerID="0c18c93270e7a68d0392d4acb324154ee21a1f857073e107621429751d23f788"
	Aug 16 17:24:59 ha-286000 kubelet[1600]: E0816 17:24:59.550950    1600 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(4805d53b-2db3-4092-a3f2-d4a854e93adc)\"" pod="kube-system/storage-provisioner" podUID="4805d53b-2db3-4092-a3f2-d4a854e93adc"
	Aug 16 17:25:12 ha-286000 kubelet[1600]: I0816 17:25:12.032904    1600 scope.go:117] "RemoveContainer" containerID="0c18c93270e7a68d0392d4acb324154ee21a1f857073e107621429751d23f788"
	Aug 16 17:25:50 ha-286000 kubelet[1600]: E0816 17:25:50.045814    1600 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 17:25:50 ha-286000 kubelet[1600]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 17:25:50 ha-286000 kubelet[1600]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 17:25:50 ha-286000 kubelet[1600]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:25:50 ha-286000 kubelet[1600]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 17:26:50 ha-286000 kubelet[1600]: E0816 17:26:50.046088    1600 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 17:26:50 ha-286000 kubelet[1600]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 17:26:50 ha-286000 kubelet[1600]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 17:26:50 ha-286000 kubelet[1600]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:26:50 ha-286000 kubelet[1600]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 17:27:50 ha-286000 kubelet[1600]: E0816 17:27:50.045671    1600 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 17:27:50 ha-286000 kubelet[1600]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 17:27:50 ha-286000 kubelet[1600]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 17:27:50 ha-286000 kubelet[1600]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:27:50 ha-286000 kubelet[1600]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 17:28:50 ha-286000 kubelet[1600]: E0816 17:28:50.046455    1600 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 17:28:50 ha-286000 kubelet[1600]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 17:28:50 ha-286000 kubelet[1600]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 17:28:50 ha-286000 kubelet[1600]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:28:50 ha-286000 kubelet[1600]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-286000 -n ha-286000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-286000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (373.02s)

TestMultiControlPlane/serial/DeleteSecondaryNode (96.49s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-amd64 -p ha-286000 node delete m03 -v=7 --alsologtostderr
E0816 10:29:35.801320    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/addons-725000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:30:35.627247    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/functional-373000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-286000 node delete m03 -v=7 --alsologtostderr: exit status 80 (1m31.890631206s)

-- stdout --
	* Deleting node m03 from cluster ha-286000
	
	

-- /stdout --
** stderr ** 
	I0816 10:29:11.276542    4853 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:29:11.276938    4853 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:29:11.276944    4853 out.go:358] Setting ErrFile to fd 2...
	I0816 10:29:11.276948    4853 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:29:11.277678    4853 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1276/.minikube/bin
	I0816 10:29:11.278119    4853 mustload.go:65] Loading cluster: ha-286000
	I0816 10:29:11.278578    4853 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:29:11.278941    4853 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:29:11.278994    4853 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:29:11.287646    4853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52265
	I0816 10:29:11.288123    4853 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:29:11.288558    4853 main.go:141] libmachine: Using API Version  1
	I0816 10:29:11.288568    4853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:29:11.288781    4853 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:29:11.288883    4853 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:29:11.288969    4853 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:29:11.289058    4853 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 4669
	I0816 10:29:11.290080    4853 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:29:11.290336    4853 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:29:11.290359    4853 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:29:11.299282    4853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52267
	I0816 10:29:11.299622    4853 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:29:11.299997    4853 main.go:141] libmachine: Using API Version  1
	I0816 10:29:11.300021    4853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:29:11.300235    4853 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:29:11.300350    4853 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:29:11.300705    4853 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:29:11.300739    4853 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:29:11.309257    4853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52269
	I0816 10:29:11.309612    4853 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:29:11.309918    4853 main.go:141] libmachine: Using API Version  1
	I0816 10:29:11.309927    4853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:29:11.310152    4853 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:29:11.310256    4853 main.go:141] libmachine: (ha-286000-m02) Calling .GetState
	I0816 10:29:11.310340    4853 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:29:11.310423    4853 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 4678
	I0816 10:29:11.311409    4853 host.go:66] Checking if "ha-286000-m02" exists ...
	I0816 10:29:11.311681    4853 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:29:11.311720    4853 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:29:11.320288    4853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52271
	I0816 10:29:11.320635    4853 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:29:11.320978    4853 main.go:141] libmachine: Using API Version  1
	I0816 10:29:11.320990    4853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:29:11.321188    4853 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:29:11.321282    4853 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:29:11.321632    4853 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:29:11.321654    4853 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:29:11.330148    4853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52273
	I0816 10:29:11.330501    4853 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:29:11.330854    4853 main.go:141] libmachine: Using API Version  1
	I0816 10:29:11.330871    4853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:29:11.331084    4853 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:29:11.331180    4853 main.go:141] libmachine: (ha-286000-m03) Calling .GetState
	I0816 10:29:11.331260    4853 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:29:11.331347    4853 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 4694
	I0816 10:29:11.332324    4853 host.go:66] Checking if "ha-286000-m03" exists ...
	I0816 10:29:11.332581    4853 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:29:11.332606    4853 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:29:11.341110    4853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52275
	I0816 10:29:11.341465    4853 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:29:11.341819    4853 main.go:141] libmachine: Using API Version  1
	I0816 10:29:11.341840    4853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:29:11.342071    4853 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:29:11.342184    4853 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:29:11.342287    4853 api_server.go:166] Checking apiserver status ...
	I0816 10:29:11.342342    4853 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:29:11.342364    4853 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:29:11.342471    4853 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:29:11.342561    4853 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:29:11.342642    4853 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:29:11.342733    4853 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:29:11.383699    4853 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2047/cgroup
	W0816 10:29:11.391948    4853 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2047/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:29:11.392009    4853 ssh_runner.go:195] Run: ls
	I0816 10:29:11.395146    4853 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0816 10:29:11.398320    4853 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0816 10:29:11.420126    4853 out.go:177] * Deleting node m03 from cluster ha-286000
	I0816 10:29:11.440912    4853 host.go:66] Checking if "ha-286000-m03" exists ...
	I0816 10:29:11.441203    4853 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:29:11.441229    4853 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:29:11.449817    4853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52279
	I0816 10:29:11.450180    4853 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:29:11.450525    4853 main.go:141] libmachine: Using API Version  1
	I0816 10:29:11.450542    4853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:29:11.450741    4853 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:29:11.450859    4853 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:29:11.450948    4853 mustload.go:65] Loading cluster: ha-286000
	I0816 10:29:11.451110    4853 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:29:11.451319    4853 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:29:11.451345    4853 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:29:11.459712    4853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52281
	I0816 10:29:11.460037    4853 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:29:11.460357    4853 main.go:141] libmachine: Using API Version  1
	I0816 10:29:11.460376    4853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:29:11.460608    4853 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:29:11.460726    4853 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:29:11.460838    4853 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:29:11.460914    4853 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 4669
	I0816 10:29:11.461918    4853 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:29:11.462171    4853 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:29:11.462197    4853 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:29:11.470479    4853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52283
	I0816 10:29:11.470831    4853 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:29:11.471157    4853 main.go:141] libmachine: Using API Version  1
	I0816 10:29:11.471167    4853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:29:11.471362    4853 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:29:11.471509    4853 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:29:11.471852    4853 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:29:11.471873    4853 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:29:11.480297    4853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52285
	I0816 10:29:11.480614    4853 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:29:11.480958    4853 main.go:141] libmachine: Using API Version  1
	I0816 10:29:11.480977    4853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:29:11.481177    4853 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:29:11.481295    4853 main.go:141] libmachine: (ha-286000-m02) Calling .GetState
	I0816 10:29:11.481369    4853 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:29:11.481439    4853 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 4678
	I0816 10:29:11.482430    4853 host.go:66] Checking if "ha-286000-m02" exists ...
	I0816 10:29:11.482679    4853 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:29:11.482702    4853 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:29:11.490975    4853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52287
	I0816 10:29:11.491309    4853 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:29:11.491646    4853 main.go:141] libmachine: Using API Version  1
	I0816 10:29:11.491664    4853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:29:11.491866    4853 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:29:11.491969    4853 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:29:11.492283    4853 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:29:11.492318    4853 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:29:11.500552    4853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52289
	I0816 10:29:11.500875    4853 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:29:11.501192    4853 main.go:141] libmachine: Using API Version  1
	I0816 10:29:11.501208    4853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:29:11.501414    4853 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:29:11.501522    4853 main.go:141] libmachine: (ha-286000-m03) Calling .GetState
	I0816 10:29:11.501601    4853 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:29:11.501685    4853 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 4694
	I0816 10:29:11.502684    4853 host.go:66] Checking if "ha-286000-m03" exists ...
	I0816 10:29:11.502933    4853 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:29:11.502955    4853 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:29:11.511248    4853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52291
	I0816 10:29:11.511617    4853 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:29:11.511954    4853 main.go:141] libmachine: Using API Version  1
	I0816 10:29:11.511965    4853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:29:11.512184    4853 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:29:11.512281    4853 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:29:11.512391    4853 api_server.go:166] Checking apiserver status ...
	I0816 10:29:11.512451    4853 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:29:11.512472    4853 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:29:11.512584    4853 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:29:11.512664    4853 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:29:11.512758    4853 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:29:11.512835    4853 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:29:11.554103    4853 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2047/cgroup
	W0816 10:29:11.562052    4853 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2047/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:29:11.562096    4853 ssh_runner.go:195] Run: ls
	I0816 10:29:11.565274    4853 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0816 10:29:11.568324    4853 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0816 10:29:11.568378    4853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl drain ha-286000-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	W0816 10:29:11.635563    4853 node.go:126] kubectl drain node "ha-286000-m03" failed (will continue): sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl drain ha-286000-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: Process exited with status 1
	stdout:
	
	stderr:
	Error from server (NotFound): nodes "ha-286000-m03" not found
	I0816 10:29:11.635640    4853 ssh_runner.go:195] Run: systemctl --version
	I0816 10:29:11.635655    4853 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:29:11.635789    4853 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:29:11.635884    4853 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:29:11.635975    4853 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:29:11.636062    4853 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:29:11.670773    4853 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0816 10:29:11.730629    4853 node.go:155] successfully reset node "ha-286000-m03"
	I0816 10:29:11.731185    4853 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:29:11.731423    4853 kapi.go:59] client config for ha-286000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key", CAFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x93a8f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 10:29:11.731768    4853 cert_rotation.go:140] Starting client certificate rotation controller
	I0816 10:29:11.732031    4853 round_trippers.go:463] DELETE https://192.169.0.254:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:11.732039    4853 round_trippers.go:469] Request Headers:
	I0816 10:29:11.732046    4853 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:11.732049    4853 round_trippers.go:473]     Content-Type: application/json
	I0816 10:29:11.732052    4853 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:11.738139    4853 round_trippers.go:574] Response Status: 404 Not Found in 6 milliseconds
	I0816 10:29:11.738275    4853 retry.go:31] will retry after 537.409731ms: nodes "ha-286000-m03" not found
	I0816 10:29:12.277448    4853 round_trippers.go:463] DELETE https://192.169.0.254:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:12.277470    4853 round_trippers.go:469] Request Headers:
	I0816 10:29:12.277481    4853 round_trippers.go:473]     Content-Type: application/json
	I0816 10:29:12.277485    4853 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:12.277491    4853 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:12.280813    4853 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:29:12.280890    4853 retry.go:31] will retry after 778.794322ms: nodes "ha-286000-m03" not found
	I0816 10:29:13.060075    4853 round_trippers.go:463] DELETE https://192.169.0.254:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:13.060098    4853 round_trippers.go:469] Request Headers:
	I0816 10:29:13.060110    4853 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:13.060116    4853 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:13.060122    4853 round_trippers.go:473]     Content-Type: application/json
	I0816 10:29:13.063708    4853 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:29:13.063830    4853 retry.go:31] will retry after 1.177748495s: nodes "ha-286000-m03" not found
	I0816 10:29:14.242780    4853 round_trippers.go:463] DELETE https://192.169.0.254:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:14.242800    4853 round_trippers.go:469] Request Headers:
	I0816 10:29:14.242812    4853 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:14.242818    4853 round_trippers.go:473]     Content-Type: application/json
	I0816 10:29:14.242824    4853 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:14.246659    4853 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:29:14.246749    4853 retry.go:31] will retry after 2.498534308s: nodes "ha-286000-m03" not found
	I0816 10:29:16.746439    4853 round_trippers.go:463] DELETE https://192.169.0.254:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:16.746463    4853 round_trippers.go:469] Request Headers:
	I0816 10:29:16.746475    4853 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:16.746481    4853 round_trippers.go:473]     Content-Type: application/json
	I0816 10:29:16.746485    4853 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:16.749570    4853 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:29:16.749636    4853 retry.go:31] will retry after 3.324368738s: nodes "ha-286000-m03" not found
	I0816 10:29:20.074222    4853 round_trippers.go:463] DELETE https://192.169.0.254:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:20.074234    4853 round_trippers.go:469] Request Headers:
	I0816 10:29:20.074239    4853 round_trippers.go:473]     Content-Type: application/json
	I0816 10:29:20.074242    4853 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:20.074245    4853 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:20.076385    4853 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:29:20.076465    4853 retry.go:31] will retry after 2.392515447s: nodes "ha-286000-m03" not found
	I0816 10:29:22.469532    4853 round_trippers.go:463] DELETE https://192.169.0.254:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:22.469552    4853 round_trippers.go:469] Request Headers:
	I0816 10:29:22.469564    4853 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:22.469578    4853 round_trippers.go:473]     Content-Type: application/json
	I0816 10:29:22.469583    4853 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:22.473047    4853 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:29:22.473142    4853 retry.go:31] will retry after 7.953333041s: nodes "ha-286000-m03" not found
	I0816 10:29:30.427827    4853 round_trippers.go:463] DELETE https://192.169.0.254:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:30.427896    4853 round_trippers.go:469] Request Headers:
	I0816 10:29:30.427910    4853 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:30.427919    4853 round_trippers.go:473]     Content-Type: application/json
	I0816 10:29:30.427927    4853 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:30.431279    4853 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:29:30.431370    4853 retry.go:31] will retry after 11.905338047s: nodes "ha-286000-m03" not found
	I0816 10:29:42.336534    4853 round_trippers.go:463] DELETE https://192.169.0.254:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:42.336549    4853 round_trippers.go:469] Request Headers:
	I0816 10:29:42.336558    4853 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:42.336563    4853 round_trippers.go:473]     Content-Type: application/json
	I0816 10:29:42.336567    4853 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:42.339082    4853 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:29:42.339199    4853 retry.go:31] will retry after 13.886345422s: nodes "ha-286000-m03" not found
	I0816 10:29:56.226705    4853 round_trippers.go:463] DELETE https://192.169.0.254:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:56.226728    4853 round_trippers.go:469] Request Headers:
	I0816 10:29:56.226739    4853 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:56.226746    4853 round_trippers.go:473]     Content-Type: application/json
	I0816 10:29:56.226751    4853 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:56.230166    4853 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:29:56.230237    4853 retry.go:31] will retry after 9.974351392s: nodes "ha-286000-m03" not found
	I0816 10:30:06.206014    4853 round_trippers.go:463] DELETE https://192.169.0.254:8443/api/v1/nodes/ha-286000-m03
	I0816 10:30:06.206029    4853 round_trippers.go:469] Request Headers:
	I0816 10:30:06.206037    4853 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:30:06.206042    4853 round_trippers.go:473]     Content-Type: application/json
	I0816 10:30:06.206045    4853 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:30:06.208621    4853 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:30:06.208717    4853 retry.go:31] will retry after 36.757803109s: nodes "ha-286000-m03" not found
	I0816 10:30:42.966191    4853 round_trippers.go:463] DELETE https://192.169.0.254:8443/api/v1/nodes/ha-286000-m03
	I0816 10:30:42.966212    4853 round_trippers.go:469] Request Headers:
	I0816 10:30:42.966224    4853 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:30:42.966230    4853 round_trippers.go:473]     Content-Type: application/json
	I0816 10:30:42.966235    4853 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:30:42.969696    4853 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	E0816 10:30:42.969819    4853 node.go:177] kubectl delete node "ha-286000-m03" failed: nodes "ha-286000-m03" not found
	I0816 10:30:43.009288    4853 out.go:201] 
	W0816 10:30:43.061391    4853 out.go:270] X Exiting due to GUEST_NODE_DELETE: deleting node: nodes "ha-286000-m03" not found
	X Exiting due to GUEST_NODE_DELETE: deleting node: nodes "ha-286000-m03" not found
	W0816 10:30:43.061417    4853 out.go:270] * 
	* 
	W0816 10:30:43.065262    4853 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_494011a6b05fec7d81170870a2aee2ef446d16a4_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_494011a6b05fec7d81170870a2aee2ef446d16a4_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:30:43.086163    4853 out.go:201] 

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-amd64 -p ha-286000 node delete m03 -v=7 --alsologtostderr": exit status 80
ha_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 -p ha-286000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-286000 status -v=7 --alsologtostderr: exit status 7 (391.198162ms)

-- stdout --
	ha-286000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-286000-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-286000-m03
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-286000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0816 10:30:43.168659    5990 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:30:43.168889    5990 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:30:43.168894    5990 out.go:358] Setting ErrFile to fd 2...
	I0816 10:30:43.168898    5990 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:30:43.169093    5990 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1276/.minikube/bin
	I0816 10:30:43.169292    5990 out.go:352] Setting JSON to false
	I0816 10:30:43.169313    5990 mustload.go:65] Loading cluster: ha-286000
	I0816 10:30:43.169356    5990 notify.go:220] Checking for updates...
	I0816 10:30:43.169650    5990 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:30:43.169668    5990 status.go:255] checking status of ha-286000 ...
	I0816 10:30:43.170032    5990 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:30:43.170080    5990 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:30:43.179260    5990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52306
	I0816 10:30:43.179618    5990 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:30:43.180045    5990 main.go:141] libmachine: Using API Version  1
	I0816 10:30:43.180078    5990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:30:43.180276    5990 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:30:43.180388    5990 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:30:43.180469    5990 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:30:43.180543    5990 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 4669
	I0816 10:30:43.181552    5990 status.go:330] ha-286000 host status = "Running" (err=<nil>)
	I0816 10:30:43.181572    5990 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:30:43.181802    5990 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:30:43.181828    5990 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:30:43.190330    5990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52308
	I0816 10:30:43.190636    5990 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:30:43.190948    5990 main.go:141] libmachine: Using API Version  1
	I0816 10:30:43.190958    5990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:30:43.191237    5990 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:30:43.191362    5990 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:30:43.191445    5990 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:30:43.191704    5990 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:30:43.191734    5990 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:30:43.204724    5990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52310
	I0816 10:30:43.205109    5990 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:30:43.205440    5990 main.go:141] libmachine: Using API Version  1
	I0816 10:30:43.205453    5990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:30:43.205663    5990 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:30:43.205767    5990 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:30:43.205894    5990 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:30:43.205914    5990 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:30:43.205991    5990 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:30:43.206070    5990 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:30:43.206143    5990 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:30:43.206226    5990 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:30:43.242036    5990 ssh_runner.go:195] Run: systemctl --version
	I0816 10:30:43.249762    5990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:30:43.261713    5990 kubeconfig.go:125] found "ha-286000" server: "https://192.169.0.254:8443"
	I0816 10:30:43.261737    5990 api_server.go:166] Checking apiserver status ...
	I0816 10:30:43.261780    5990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:30:43.273639    5990 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2047/cgroup
	W0816 10:30:43.282690    5990 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2047/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:30:43.282746    5990 ssh_runner.go:195] Run: ls
	I0816 10:30:43.288064    5990 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0816 10:30:43.292207    5990 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0816 10:30:43.292219    5990 status.go:422] ha-286000 apiserver status = Running (err=<nil>)
	I0816 10:30:43.292228    5990 status.go:257] ha-286000 status: &{Name:ha-286000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:30:43.292240    5990 status.go:255] checking status of ha-286000-m02 ...
	I0816 10:30:43.292513    5990 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:30:43.292533    5990 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:30:43.301474    5990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52314
	I0816 10:30:43.301816    5990 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:30:43.302157    5990 main.go:141] libmachine: Using API Version  1
	I0816 10:30:43.302175    5990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:30:43.302401    5990 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:30:43.302507    5990 main.go:141] libmachine: (ha-286000-m02) Calling .GetState
	I0816 10:30:43.302587    5990 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:30:43.302675    5990 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 4678
	I0816 10:30:43.303639    5990 status.go:330] ha-286000-m02 host status = "Running" (err=<nil>)
	I0816 10:30:43.303649    5990 host.go:66] Checking if "ha-286000-m02" exists ...
	I0816 10:30:43.303890    5990 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:30:43.303911    5990 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:30:43.312464    5990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52316
	I0816 10:30:43.312806    5990 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:30:43.313174    5990 main.go:141] libmachine: Using API Version  1
	I0816 10:30:43.313192    5990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:30:43.313415    5990 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:30:43.313524    5990 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:30:43.313602    5990 host.go:66] Checking if "ha-286000-m02" exists ...
	I0816 10:30:43.313871    5990 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:30:43.313896    5990 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:30:43.322573    5990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52318
	I0816 10:30:43.322910    5990 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:30:43.323275    5990 main.go:141] libmachine: Using API Version  1
	I0816 10:30:43.323292    5990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:30:43.323490    5990 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:30:43.323579    5990 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:30:43.323710    5990 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:30:43.323723    5990 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:30:43.323805    5990 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:30:43.323897    5990 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:30:43.323979    5990 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:30:43.324065    5990 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:30:43.363797    5990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:30:43.376105    5990 kubeconfig.go:125] found "ha-286000" server: "https://192.169.0.254:8443"
	I0816 10:30:43.376119    5990 api_server.go:166] Checking apiserver status ...
	I0816 10:30:43.376162    5990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:30:43.388583    5990 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2099/cgroup
	W0816 10:30:43.397182    5990 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2099/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:30:43.397245    5990 ssh_runner.go:195] Run: ls
	I0816 10:30:43.400590    5990 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0816 10:30:43.403663    5990 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0816 10:30:43.403673    5990 status.go:422] ha-286000-m02 apiserver status = Running (err=<nil>)
	I0816 10:30:43.403681    5990 status.go:257] ha-286000-m02 status: &{Name:ha-286000-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:30:43.403691    5990 status.go:255] checking status of ha-286000-m03 ...
	I0816 10:30:43.403961    5990 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:30:43.403980    5990 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:30:43.412670    5990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52322
	I0816 10:30:43.413037    5990 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:30:43.413386    5990 main.go:141] libmachine: Using API Version  1
	I0816 10:30:43.413404    5990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:30:43.413627    5990 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:30:43.413742    5990 main.go:141] libmachine: (ha-286000-m03) Calling .GetState
	I0816 10:30:43.413817    5990 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:30:43.413901    5990 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 4694
	I0816 10:30:43.414897    5990 status.go:330] ha-286000-m03 host status = "Running" (err=<nil>)
	I0816 10:30:43.414909    5990 host.go:66] Checking if "ha-286000-m03" exists ...
	I0816 10:30:43.415183    5990 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:30:43.415208    5990 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:30:43.423738    5990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52324
	I0816 10:30:43.424066    5990 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:30:43.424432    5990 main.go:141] libmachine: Using API Version  1
	I0816 10:30:43.424451    5990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:30:43.424685    5990 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:30:43.424798    5990 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:30:43.424895    5990 host.go:66] Checking if "ha-286000-m03" exists ...
	I0816 10:30:43.425161    5990 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:30:43.425183    5990 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:30:43.433716    5990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52326
	I0816 10:30:43.434051    5990 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:30:43.434381    5990 main.go:141] libmachine: Using API Version  1
	I0816 10:30:43.434396    5990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:30:43.434599    5990 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:30:43.434702    5990 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:30:43.434824    5990 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:30:43.434834    5990 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:30:43.434918    5990 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:30:43.435001    5990 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:30:43.435085    5990 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:30:43.435155    5990 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:30:43.470345    5990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:30:43.481025    5990 kubeconfig.go:125] found "ha-286000" server: "https://192.169.0.254:8443"
	I0816 10:30:43.481038    5990 api_server.go:166] Checking apiserver status ...
	I0816 10:30:43.481079    5990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 10:30:43.490722    5990 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:30:43.490735    5990 status.go:422] ha-286000-m03 apiserver status = Stopped (err=<nil>)
	I0816 10:30:43.490743    5990 status.go:257] ha-286000-m03 status: &{Name:ha-286000-m03 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:30:43.490752    5990 status.go:255] checking status of ha-286000-m04 ...
	I0816 10:30:43.491036    5990 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:30:43.491066    5990 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:30:43.499774    5990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52329
	I0816 10:30:43.500102    5990 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:30:43.500456    5990 main.go:141] libmachine: Using API Version  1
	I0816 10:30:43.500478    5990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:30:43.500704    5990 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:30:43.500817    5990 main.go:141] libmachine: (ha-286000-m04) Calling .GetState
	I0816 10:30:43.500895    5990 main.go:141] libmachine: (ha-286000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:30:43.500990    5990 main.go:141] libmachine: (ha-286000-m04) DBG | hyperkit pid from json: 4248
	I0816 10:30:43.501968    5990 main.go:141] libmachine: (ha-286000-m04) DBG | hyperkit pid 4248 missing from process table
	I0816 10:30:43.501991    5990 status.go:330] ha-286000-m04 host status = "Stopped" (err=<nil>)
	I0816 10:30:43.501997    5990 status.go:343] host is not running, skipping remaining checks
	I0816 10:30:43.502004    5990 status.go:257] ha-286000-m04 status: &{Name:ha-286000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-amd64 -p ha-286000 status -v=7 --alsologtostderr" : exit status 7
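The exit status 7 from the status command is itself informative: minikube composes it as a bitmask of per-component health flags, and 7 = 1|2|4 is consistent with the host, kubelet, and apiserver bits all being set once any node (here m03 and m04) is degraded. A small sketch of that encoding, with assumed flag names rather than minikube's real constants:

	package main

	import "fmt"

	// Assumed per-component flags; a non-zero `minikube status` exit code can
	// be read as the OR of these bits.
	const (
		hostNotRunning      = 1 << 0 // a node's VM is down
		kubeletNotRunning   = 1 << 1 // kubelet is stopped on some node
		apiserverNotRunning = 1 << 2 // an apiserver is unreachable
	)

	// statusExitCode sketches how the observed exit status 7 (= 1|2|4) can be
	// composed once any node in the cluster is not fully running.
	func statusExitCode(hostDown, kubeletDown, apiDown bool) int {
		code := 0
		if hostDown {
			code |= hostNotRunning
		}
		if kubeletDown {
			code |= kubeletNotRunning
		}
		if apiDown {
			code |= apiserverNotRunning
		}
		return code
	}

	func main() {
		fmt.Println(statusExitCode(true, true, true)) // 7, matching the run above
	}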
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-286000 -n ha-286000
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-286000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-286000 logs -n 25: (3.391329618s)
helpers_test.go:252: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT |                     |
	|         | busybox-7dff88458-99xmp --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT |                     |
	|         | busybox-7dff88458-99xmp --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT |                     |
	|         | busybox-7dff88458-99xmp -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92 -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT |                     |
	|         | busybox-7dff88458-99xmp              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92 -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| node    | add -p ha-286000 -v=7                | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:17 PDT | 16 Aug 24 10:17 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-286000 node stop m02 -v=7         | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:17 PDT | 16 Aug 24 10:18 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-286000 node start m02 -v=7        | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:20 PDT | 16 Aug 24 10:22 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-286000 -v=7               | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:22 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| stop    | -p ha-286000 -v=7                    | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:22 PDT | 16 Aug 24 10:23 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| start   | -p ha-286000 --wait=true -v=7        | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:23 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-286000                    | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:29 PDT |                     |
	| node    | ha-286000 node delete m03 -v=7       | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:29 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 10:23:31
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 10:23:31.430615    4656 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:23:31.431053    4656 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:23:31.431060    4656 out.go:358] Setting ErrFile to fd 2...
	I0816 10:23:31.431065    4656 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:23:31.431301    4656 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1276/.minikube/bin
	I0816 10:23:31.432961    4656 out.go:352] Setting JSON to false
	I0816 10:23:31.457337    4656 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3181,"bootTime":1723825830,"procs":437,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0816 10:23:31.457435    4656 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:23:31.479716    4656 out.go:177] * [ha-286000] minikube v1.33.1 on Darwin 14.6.1
	I0816 10:23:31.522521    4656 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:23:31.522577    4656 notify.go:220] Checking for updates...
	I0816 10:23:31.567096    4656 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:23:31.588384    4656 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0816 10:23:31.609442    4656 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:23:31.630204    4656 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:23:31.651227    4656 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:23:31.673167    4656 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:23:31.673335    4656 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:23:31.674026    4656 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:23:31.674118    4656 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:23:31.683709    4656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52161
	I0816 10:23:31.684063    4656 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:23:31.684452    4656 main.go:141] libmachine: Using API Version  1
	I0816 10:23:31.684463    4656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:23:31.684744    4656 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:23:31.684873    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:23:31.714156    4656 out.go:177] * Using the hyperkit driver based on existing profile
	I0816 10:23:31.756393    4656 start.go:297] selected driver: hyperkit
	I0816 10:23:31.756421    4656 start.go:901] validating driver "hyperkit" against &{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:23:31.756672    4656 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:23:31.756879    4656 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:23:31.757097    4656 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19461-1276/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0816 10:23:31.766849    4656 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0816 10:23:31.772699    4656 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:23:31.772722    4656 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0816 10:23:31.776315    4656 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:23:31.776385    4656 cni.go:84] Creating CNI manager for ""
	I0816 10:23:31.776395    4656 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0816 10:23:31.776475    4656 start.go:340] cluster config:
	{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:23:31.776573    4656 iso.go:125] acquiring lock: {Name:mkb430b43b5e7a8a683454482519837ed996c78d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:23:31.798308    4656 out.go:177] * Starting "ha-286000" primary control-plane node in "ha-286000" cluster
	I0816 10:23:31.820262    4656 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:23:31.820333    4656 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0816 10:23:31.820361    4656 cache.go:56] Caching tarball of preloaded images
	I0816 10:23:31.820552    4656 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 10:23:31.820569    4656 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:23:31.820757    4656 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:23:31.821672    4656 start.go:360] acquireMachinesLock for ha-286000: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:23:31.821789    4656 start.go:364] duration metric: took 93.411µs to acquireMachinesLock for "ha-286000"
	I0816 10:23:31.821826    4656 start.go:96] Skipping create...Using existing machine configuration
	I0816 10:23:31.821843    4656 fix.go:54] fixHost starting: 
	I0816 10:23:31.822296    4656 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:23:31.822326    4656 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:23:31.831598    4656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52163
	I0816 10:23:31.831979    4656 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:23:31.832360    4656 main.go:141] libmachine: Using API Version  1
	I0816 10:23:31.832373    4656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:23:31.832622    4656 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:23:31.832766    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:23:31.832876    4656 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:23:31.832983    4656 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:23:31.833087    4656 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:23:31.834009    4656 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid 3771 missing from process table
	I0816 10:23:31.834044    4656 fix.go:112] recreateIfNeeded on ha-286000: state=Stopped err=<nil>
	I0816 10:23:31.834061    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	W0816 10:23:31.834156    4656 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 10:23:31.892140    4656 out.go:177] * Restarting existing hyperkit VM for "ha-286000" ...
	I0816 10:23:31.931475    4656 main.go:141] libmachine: (ha-286000) Calling .Start
	I0816 10:23:31.931796    4656 main.go:141] libmachine: (ha-286000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/hyperkit.pid
	I0816 10:23:31.931814    4656 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:23:31.933360    4656 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid 3771 missing from process table
	I0816 10:23:31.933379    4656 main.go:141] libmachine: (ha-286000) DBG | pid 3771 is in state "Stopped"
	I0816 10:23:31.933400    4656 main.go:141] libmachine: (ha-286000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/hyperkit.pid...
	I0816 10:23:31.934010    4656 main.go:141] libmachine: (ha-286000) DBG | Using UUID ad96de67-e238-408c-89eb-d74e5b68d297
	I0816 10:23:32.043909    4656 main.go:141] libmachine: (ha-286000) DBG | Generated MAC 66:c8:48:4e:12:1b
	I0816 10:23:32.043928    4656 main.go:141] libmachine: (ha-286000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000
	I0816 10:23:32.044052    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ad96de67-e238-408c-89eb-d74e5b68d297", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a89c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:23:32.044084    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ad96de67-e238-408c-89eb-d74e5b68d297", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a89c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:23:32.044134    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "ad96de67-e238-408c-89eb-d74e5b68d297", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/ha-286000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"}
	I0816 10:23:32.044180    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U ad96de67-e238-408c-89eb-d74e5b68d297 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/ha-286000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"
	I0816 10:23:32.044192    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 10:23:32.045646    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 DEBUG: hyperkit: Pid is 4669
	I0816 10:23:32.046030    4656 main.go:141] libmachine: (ha-286000) DBG | Attempt 0
	I0816 10:23:32.046046    4656 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:23:32.046146    4656 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 4669
	I0816 10:23:32.048140    4656 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:23:32.048193    4656 main.go:141] libmachine: (ha-286000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0816 10:23:32.048231    4656 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dbd7}
	I0816 10:23:32.048249    4656 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:23:32.048272    4656 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66c0d7f9}
	I0816 10:23:32.048286    4656 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:23:32.048293    4656 main.go:141] libmachine: (ha-286000) DBG | Found match: 66:c8:48:4e:12:1b
	I0816 10:23:32.048301    4656 main.go:141] libmachine: (ha-286000) DBG | IP: 192.169.0.5
	I0816 10:23:32.048382    4656 main.go:141] libmachine: (ha-286000) Calling .GetConfigRaw
	I0816 10:23:32.049597    4656 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:23:32.049816    4656 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:23:32.050246    4656 machine.go:93] provisionDockerMachine start ...
	I0816 10:23:32.050258    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:23:32.050395    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:32.050512    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:32.050602    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:32.050694    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:32.050788    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:32.050933    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:23:32.051148    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:23:32.051157    4656 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 10:23:32.053822    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 10:23:32.105618    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 10:23:32.106644    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:23:32.106664    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:23:32.106672    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:23:32.106681    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:23:32.488273    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 10:23:32.488286    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 10:23:32.602925    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:23:32.602945    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:23:32.602968    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:23:32.603003    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:23:32.603842    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 10:23:32.603853    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 10:23:38.196809    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:38 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 10:23:38.196887    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:38 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 10:23:38.196898    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:38 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 10:23:38.223115    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:38 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 10:23:43.125906    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 10:23:43.125920    4656 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:23:43.126080    4656 buildroot.go:166] provisioning hostname "ha-286000"
	I0816 10:23:43.126090    4656 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:23:43.126193    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:43.126289    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:43.126427    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:43.126532    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:43.126633    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:43.126763    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:23:43.126897    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:23:43.126905    4656 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-286000 && echo "ha-286000" | sudo tee /etc/hostname
	I0816 10:23:43.200672    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-286000
	
	I0816 10:23:43.200691    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:43.200824    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:43.200934    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:43.201035    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:43.201146    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:43.201266    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:23:43.201423    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:23:43.201434    4656 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-286000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-286000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-286000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 10:23:43.272382    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 10:23:43.272403    4656 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19461-1276/.minikube CaCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19461-1276/.minikube}
	I0816 10:23:43.272418    4656 buildroot.go:174] setting up certificates
	I0816 10:23:43.272432    4656 provision.go:84] configureAuth start
	I0816 10:23:43.272440    4656 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:23:43.272576    4656 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:23:43.272680    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:43.272769    4656 provision.go:143] copyHostCerts
	I0816 10:23:43.272801    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:23:43.272890    4656 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem, removing ...
	I0816 10:23:43.272898    4656 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:23:43.273149    4656 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem (1078 bytes)
	I0816 10:23:43.273406    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:23:43.273447    4656 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem, removing ...
	I0816 10:23:43.273452    4656 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:23:43.273542    4656 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem (1123 bytes)
	I0816 10:23:43.273700    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:23:43.273746    4656 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem, removing ...
	I0816 10:23:43.273751    4656 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:23:43.273833    4656 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem (1679 bytes)
	I0816 10:23:43.274002    4656 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem org=jenkins.ha-286000 san=[127.0.0.1 192.169.0.5 ha-286000 localhost minikube]
	I0816 10:23:43.350973    4656 provision.go:177] copyRemoteCerts
	I0816 10:23:43.351030    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 10:23:43.351047    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:43.351198    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:43.351290    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:43.351418    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:43.351516    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:23:43.390290    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 10:23:43.390367    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 10:23:43.409250    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 10:23:43.409310    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0816 10:23:43.428428    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 10:23:43.428486    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 10:23:43.447295    4656 provision.go:87] duration metric: took 174.931658ms to configureAuth
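
The configureAuth step above re-syncs the local client certs and then mints a new Docker server certificate whose SANs cover the VM's IP and host names (san=[127.0.0.1 192.169.0.5 ha-286000 localhost minikube]). Below is a minimal sketch of issuing such a SAN-bearing server certificate with Go's crypto/x509, assuming a PKCS#1 RSA CA key; the file names are placeholders for the ca.pem/ca-key.pem paths in the log, and this is not minikube's actual provision code.

	// Sketch: issue a CA-signed server cert carrying IP and DNS SANs.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		caPEM, err := os.ReadFile("ca.pem") // placeholder path
		if err != nil {
			log.Fatal(err)
		}
		keyPEM, err := os.ReadFile("ca-key.pem") // placeholder path
		if err != nil {
			log.Fatal(err)
		}
		caBlock, _ := pem.Decode(caPEM)
		keyBlock, _ := pem.Decode(keyPEM)
		if caBlock == nil || keyBlock == nil {
			log.Fatal("missing PEM data")
		}
		caCert, err := x509.ParseCertificate(caBlock.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA CA key
		if err != nil {
			log.Fatal(err)
		}
		// SANs mirror the san=[...] list logged by provision.go above.
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-286000"}},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.5")},
			DNSNames:     []string{"ha-286000", "localhost", "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &leafKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}) // server.pem body
	}
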
	I0816 10:23:43.447308    4656 buildroot.go:189] setting minikube options for container-runtime
	I0816 10:23:43.447492    4656 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:23:43.447506    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:23:43.447636    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:43.447734    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:43.447819    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:43.447898    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:43.447976    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:43.448093    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:23:43.448217    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:23:43.448225    4656 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 10:23:43.510056    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 10:23:43.510072    4656 buildroot.go:70] root file system type: tmpfs
	I0816 10:23:43.510138    4656 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 10:23:43.510152    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:43.510280    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:43.510367    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:43.510466    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:43.510546    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:43.510704    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:23:43.510847    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:23:43.510894    4656 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 10:23:43.585463    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 10:23:43.585485    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:43.585612    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:43.585708    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:43.585797    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:43.585881    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:43.585994    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:23:43.586142    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:23:43.586155    4656 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 10:23:45.281245    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0816 10:23:45.281272    4656 machine.go:96] duration metric: took 13.233954511s to provisionDockerMachine
	I0816 10:23:45.281282    4656 start.go:293] postStartSetup for "ha-286000" (driver="hyperkit")
	I0816 10:23:45.281290    4656 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 10:23:45.281301    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:23:45.281477    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 10:23:45.281497    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:45.281579    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:45.281672    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:45.281756    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:45.281830    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:23:45.322349    4656 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 10:23:45.325873    4656 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 10:23:45.325888    4656 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/addons for local assets ...
	I0816 10:23:45.326003    4656 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/files for local assets ...
	I0816 10:23:45.326184    4656 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> 18312.pem in /etc/ssl/certs
	I0816 10:23:45.326190    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /etc/ssl/certs/18312.pem
	I0816 10:23:45.326400    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 10:23:45.335377    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:23:45.364973    4656 start.go:296] duration metric: took 83.714414ms for postStartSetup
	I0816 10:23:45.365002    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:23:45.365179    4656 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0816 10:23:45.365192    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:45.365284    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:45.365363    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:45.365463    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:45.365567    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:23:45.403540    4656 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0816 10:23:45.403604    4656 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0816 10:23:45.456725    4656 fix.go:56] duration metric: took 13.637911557s for fixHost
	I0816 10:23:45.456746    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:45.456881    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:45.456970    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:45.457077    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:45.457170    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:45.457308    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:23:45.457449    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:23:45.457456    4656 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 10:23:45.520497    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723829025.657632114
	
	I0816 10:23:45.520510    4656 fix.go:216] guest clock: 1723829025.657632114
	I0816 10:23:45.520516    4656 fix.go:229] Guest: 2024-08-16 10:23:45.657632114 -0700 PDT Remote: 2024-08-16 10:23:45.456737 -0700 PDT m=+14.070866227 (delta=200.895114ms)
	I0816 10:23:45.520533    4656 fix.go:200] guest clock delta is within tolerance: 200.895114ms
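
The guest clock check just above runs `date +%s.%N` in the VM and diffs the result against the host's wall clock; the ~201ms skew seen here is accepted. A sketch of that delta computation follows; the one-second tolerance is an assumption, since the log only shows that roughly 200ms passes.

	// Sketch: parse `date +%s.%N` output and compare with the host clock.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func guestClockDelta(dateOutput string, host time.Time) (time.Duration, error) {
		// float64 keeps roughly microsecond precision at epoch magnitude,
		// which is plenty for a millisecond-scale tolerance check.
		secs, err := strconv.ParseFloat(strings.TrimSpace(dateOutput), 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(host), nil
	}

	func main() {
		// Values taken from the log lines above.
		host := time.Unix(1723829025, 456737000)
		delta, err := guestClockDelta("1723829025.657632114", host)
		if err != nil {
			panic(err)
		}
		const tolerance = time.Second // assumed threshold, not minikube's constant
		fmt.Printf("delta=%v, within tolerance: %t\n", delta, delta.Abs() < tolerance)
	}
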
	I0816 10:23:45.520536    4656 start.go:83] releasing machines lock for "ha-286000", held for 13.701786252s
	I0816 10:23:45.520558    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:23:45.520685    4656 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:23:45.520780    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:23:45.521071    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:23:45.521183    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:23:45.521258    4656 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 10:23:45.521295    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:45.521314    4656 ssh_runner.go:195] Run: cat /version.json
	I0816 10:23:45.521325    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:45.521385    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:45.521413    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:45.521478    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:45.521492    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:45.521569    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:45.521588    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:45.521684    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:23:45.521698    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
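
The interleaved .GetSSHPort/.GetSSHKeyPath calls above belong to two probes issued at once: the registry reachability check and the version.json read each open their own SSH session. Here is a sketch of running the same two probes concurrently; it shells out to the ssh CLI for brevity, whereas minikube drives its own ssh_runner.

	// Sketch: fan out two remote commands over separate SSH sessions.
	package main

	import (
		"fmt"
		"os/exec"
		"sync"
	)

	func main() {
		key := "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa"
		cmds := []string{
			"curl -sS -m 2 https://registry.k8s.io/",
			"cat /version.json",
		}
		out := make([]string, len(cmds))
		var wg sync.WaitGroup
		for i, c := range cmds {
			wg.Add(1)
			go func(i int, c string) {
				defer wg.Done()
				b, err := exec.Command("ssh", "-i", key, "docker@192.169.0.5", c).CombinedOutput()
				if err != nil {
					out[i] = "error: " + err.Error()
					return
				}
				out[i] = string(b)
			}(i, c)
		}
		wg.Wait()
		for i, c := range cmds {
			fmt.Printf("%s => %s\n", c, out[i])
		}
	}
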
	I0816 10:23:45.608738    4656 ssh_runner.go:195] Run: systemctl --version
	I0816 10:23:45.613819    4656 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 10:23:45.618009    4656 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 10:23:45.618054    4656 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 10:23:45.630928    4656 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 10:23:45.630940    4656 start.go:495] detecting cgroup driver to use...
	I0816 10:23:45.631050    4656 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:23:45.647297    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 10:23:45.656185    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 10:23:45.664870    4656 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 10:23:45.664909    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 10:23:45.673735    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:23:45.682541    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 10:23:45.691093    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:23:45.699692    4656 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 10:23:45.708389    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 10:23:45.717214    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 10:23:45.726031    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 10:23:45.734772    4656 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 10:23:45.742525    4656 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 10:23:45.750474    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:23:45.857037    4656 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0816 10:23:45.876038    4656 start.go:495] detecting cgroup driver to use...
	I0816 10:23:45.876115    4656 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 10:23:45.891371    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:23:45.904769    4656 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 10:23:45.925222    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:23:45.935653    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:23:45.946111    4656 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 10:23:45.966114    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:23:45.976753    4656 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:23:45.991951    4656 ssh_runner.go:195] Run: which cri-dockerd
	I0816 10:23:45.995087    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 10:23:46.002262    4656 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 10:23:46.015662    4656 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 10:23:46.113010    4656 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 10:23:46.220102    4656 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 10:23:46.220181    4656 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 10:23:46.234448    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:23:46.327392    4656 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:23:48.670555    4656 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.343962753s)
	I0816 10:23:48.670612    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0816 10:23:48.681270    4656 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0816 10:23:48.694180    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:23:48.704525    4656 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0816 10:23:48.796386    4656 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0816 10:23:48.896301    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:23:49.015732    4656 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0816 10:23:49.029308    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:23:49.039437    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:23:49.133284    4656 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0816 10:23:49.196413    4656 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0816 10:23:49.196492    4656 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0816 10:23:49.200987    4656 start.go:563] Will wait 60s for crictl version
	I0816 10:23:49.201034    4656 ssh_runner.go:195] Run: which crictl
	I0816 10:23:49.204272    4656 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 10:23:49.229772    4656 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0816 10:23:49.229851    4656 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:23:49.247799    4656 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:23:49.310834    4656 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0816 10:23:49.310884    4656 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:23:49.311324    4656 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0816 10:23:49.315940    4656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:23:49.325830    4656 kubeadm.go:883] updating cluster {Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 10:23:49.325921    4656 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:23:49.325979    4656 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 10:23:49.344604    4656 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0816 10:23:49.344616    4656 docker.go:615] Images already preloaded, skipping extraction
	I0816 10:23:49.344689    4656 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 10:23:49.358019    4656 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0816 10:23:49.358039    4656 cache_images.go:84] Images are preloaded, skipping loading
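
Extraction is skipped because everything in the expected preload set for v1.31.0 already appears in the `docker images` listing. A plausible reduction of that containment check follows (hypothetical helper; the image lists are abridged from the stdout block above):

	// Sketch: skip preload extraction when all expected images are present.
	package main

	import "fmt"

	func imagesPreloaded(listed, expected []string) bool {
		have := make(map[string]bool, len(listed))
		for _, img := range listed {
			have[img] = true
		}
		for _, img := range expected {
			if !have[img] {
				return false
			}
		}
		return true
	}

	func main() {
		listed := []string{
			"registry.k8s.io/kube-apiserver:v1.31.0",
			"registry.k8s.io/etcd:3.5.15-0",
			"registry.k8s.io/pause:3.10",
		}
		expected := []string{"registry.k8s.io/kube-apiserver:v1.31.0", "registry.k8s.io/pause:3.10"}
		fmt.Println(imagesPreloaded(listed, expected)) // true: extraction can be skipped
	}
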
	I0816 10:23:49.358049    4656 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.0 docker true true} ...
	I0816 10:23:49.358133    4656 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-286000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 10:23:49.358200    4656 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0816 10:23:49.396733    4656 cni.go:84] Creating CNI manager for ""
	I0816 10:23:49.396746    4656 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0816 10:23:49.396758    4656 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 10:23:49.396773    4656 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-286000 NodeName:ha-286000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 10:23:49.396858    4656 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-286000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 10:23:49.396876    4656 kube-vip.go:115] generating kube-vip config ...
	I0816 10:23:49.396930    4656 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 10:23:49.409760    4656 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 10:23:49.409827    4656 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0816 10:23:49.409880    4656 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 10:23:49.417741    4656 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 10:23:49.417784    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0816 10:23:49.425178    4656 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0816 10:23:49.438709    4656 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 10:23:49.451834    4656 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0816 10:23:49.465615    4656 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0816 10:23:49.478992    4656 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0816 10:23:49.481872    4656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:23:49.491581    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:23:49.591270    4656 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:23:49.605166    4656 certs.go:68] Setting up /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000 for IP: 192.169.0.5
	I0816 10:23:49.605178    4656 certs.go:194] generating shared ca certs ...
	I0816 10:23:49.605204    4656 certs.go:226] acquiring lock for ca certs: {Name:mka549fbc467849834d2267da204028c49d88a3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:23:49.605373    4656 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key
	I0816 10:23:49.605447    4656 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key
	I0816 10:23:49.605458    4656 certs.go:256] generating profile certs ...
	I0816 10:23:49.605548    4656 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key
	I0816 10:23:49.605569    4656 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.1abcce66
	I0816 10:23:49.605590    4656 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.1abcce66 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I0816 10:23:49.872724    4656 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.1abcce66 ...
	I0816 10:23:49.872746    4656 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.1abcce66: {Name:mk52a3c288948ed76c5e0c3d52d6b4bf6d85dac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:23:49.873234    4656 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.1abcce66 ...
	I0816 10:23:49.873246    4656 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.1abcce66: {Name:mk4d6d8f8e53e86a8e5b1aff2a47e28c9af375aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:23:49.873462    4656 certs.go:381] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.1abcce66 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt
	I0816 10:23:49.873670    4656 certs.go:385] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.1abcce66 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key
	I0816 10:23:49.873917    4656 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key
	I0816 10:23:49.873927    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 10:23:49.873950    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 10:23:49.873969    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 10:23:49.873988    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 10:23:49.874005    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 10:23:49.874022    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 10:23:49.874039    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 10:23:49.874056    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 10:23:49.874155    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem (1338 bytes)
	W0816 10:23:49.874204    4656 certs.go:480] ignoring /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831_empty.pem, impossibly tiny 0 bytes
	I0816 10:23:49.874213    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 10:23:49.874243    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem (1078 bytes)
	I0816 10:23:49.874272    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem (1123 bytes)
	I0816 10:23:49.874303    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem (1679 bytes)
	I0816 10:23:49.874365    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:23:49.874404    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem -> /usr/share/ca-certificates/1831.pem
	I0816 10:23:49.874426    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /usr/share/ca-certificates/18312.pem
	I0816 10:23:49.874445    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:23:49.874951    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 10:23:49.894591    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 10:23:49.949362    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 10:23:50.001129    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 10:23:50.031447    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0816 10:23:50.051861    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 10:23:50.072126    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 10:23:50.092020    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 10:23:50.111735    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem --> /usr/share/ca-certificates/1831.pem (1338 bytes)
	I0816 10:23:50.131448    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /usr/share/ca-certificates/18312.pem (1708 bytes)
	I0816 10:23:50.150204    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 10:23:50.170431    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 10:23:50.183792    4656 ssh_runner.go:195] Run: openssl version
	I0816 10:23:50.188069    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18312.pem && ln -fs /usr/share/ca-certificates/18312.pem /etc/ssl/certs/18312.pem"
	I0816 10:23:50.196462    4656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18312.pem
	I0816 10:23:50.199930    4656 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 16:57 /usr/share/ca-certificates/18312.pem
	I0816 10:23:50.199966    4656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18312.pem
	I0816 10:23:50.204340    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18312.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 10:23:50.212595    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 10:23:50.220934    4656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:23:50.224472    4656 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:48 /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:23:50.224507    4656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:23:50.228762    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 10:23:50.237224    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1831.pem && ln -fs /usr/share/ca-certificates/1831.pem /etc/ssl/certs/1831.pem"
	I0816 10:23:50.245558    4656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1831.pem
	I0816 10:23:50.249052    4656 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 16:57 /usr/share/ca-certificates/1831.pem
	I0816 10:23:50.249090    4656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1831.pem
	I0816 10:23:50.253505    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1831.pem /etc/ssl/certs/51391683.0"
	I0816 10:23:50.261784    4656 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 10:23:50.265339    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 10:23:50.269761    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 10:23:50.273967    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 10:23:50.278404    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 10:23:50.282734    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 10:23:50.286959    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
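
Each of the `openssl x509 ... -checkend 86400` probes above exits non-zero only when the certificate expires within the next 86400 seconds (24 hours); that is how the restart path decides whether the existing control-plane PKI can be reused. An equivalent check in Go, as a sketch:

	// Sketch: Go analogue of `openssl x509 -checkend 86400`.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		// True when the cert will be expired d from now, like -checkend.
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 86400*time.Second)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}
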
	I0816 10:23:50.291328    4656 kubeadm.go:392] StartCluster: {Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:23:50.291439    4656 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0816 10:23:50.308917    4656 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 10:23:50.316477    4656 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 10:23:50.316487    4656 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 10:23:50.316521    4656 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 10:23:50.324768    4656 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:23:50.325077    4656 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-286000" does not appear in /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:23:50.325160    4656 kubeconfig.go:62] /Users/jenkins/minikube-integration/19461-1276/kubeconfig needs updating (will repair): [kubeconfig missing "ha-286000" cluster setting kubeconfig missing "ha-286000" context setting]
	I0816 10:23:50.325346    4656 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/kubeconfig: {Name:mk7f4491c8ec5cf96c96a5a1a5dbfba4301394a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:23:50.325844    4656 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:23:50.326042    4656 kapi.go:59] client config for ha-286000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key", CAFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x5540f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 10:23:50.326340    4656 cert_rotation.go:140] Starting client certificate rotation controller
	I0816 10:23:50.326539    4656 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 10:23:50.333744    4656 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0816 10:23:50.333758    4656 kubeadm.go:597] duration metric: took 17.27164ms to restartPrimaryControlPlane
	I0816 10:23:50.333763    4656 kubeadm.go:394] duration metric: took 42.452811ms to StartCluster
	I0816 10:23:50.333775    4656 settings.go:142] acquiring lock: {Name:mk60e377244eac756871b74b6bd45115654652a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:23:50.333847    4656 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:23:50.334196    4656 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/kubeconfig: {Name:mk7f4491c8ec5cf96c96a5a1a5dbfba4301394a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:23:50.334417    4656 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:23:50.334430    4656 start.go:241] waiting for startup goroutines ...
	I0816 10:23:50.334436    4656 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 10:23:50.334546    4656 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:23:50.378007    4656 out.go:177] * Enabled addons: 
	I0816 10:23:50.399051    4656 addons.go:510] duration metric: took 64.628768ms for enable addons: enabled=[]
	I0816 10:23:50.399122    4656 start.go:246] waiting for cluster config update ...
	I0816 10:23:50.399134    4656 start.go:255] writing updated cluster config ...
	I0816 10:23:50.421150    4656 out.go:201] 
	I0816 10:23:50.443594    4656 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:23:50.443722    4656 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:23:50.466091    4656 out.go:177] * Starting "ha-286000-m02" control-plane node in "ha-286000" cluster
	I0816 10:23:50.507896    4656 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:23:50.507978    4656 cache.go:56] Caching tarball of preloaded images
	I0816 10:23:50.508166    4656 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 10:23:50.508183    4656 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:23:50.508305    4656 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:23:50.509238    4656 start.go:360] acquireMachinesLock for ha-286000-m02: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:23:50.509340    4656 start.go:364] duration metric: took 77.349µs to acquireMachinesLock for "ha-286000-m02"
	I0816 10:23:50.509364    4656 start.go:96] Skipping create...Using existing machine configuration
	I0816 10:23:50.509373    4656 fix.go:54] fixHost starting: m02
	I0816 10:23:50.509785    4656 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:23:50.509813    4656 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:23:50.519278    4656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52185
	I0816 10:23:50.519808    4656 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:23:50.520224    4656 main.go:141] libmachine: Using API Version  1
	I0816 10:23:50.520241    4656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:23:50.520527    4656 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:23:50.520742    4656 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:23:50.520847    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetState
	I0816 10:23:50.520930    4656 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:23:50.521027    4656 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 4408
	I0816 10:23:50.521973    4656 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid 4408 missing from process table
	I0816 10:23:50.522001    4656 fix.go:112] recreateIfNeeded on ha-286000-m02: state=Stopped err=<nil>
	I0816 10:23:50.522008    4656 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	W0816 10:23:50.522113    4656 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 10:23:50.564905    4656 out.go:177] * Restarting existing hyperkit VM for "ha-286000-m02" ...
	I0816 10:23:50.585936    4656 main.go:141] libmachine: (ha-286000-m02) Calling .Start
	I0816 10:23:50.586207    4656 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:23:50.586317    4656 main.go:141] libmachine: (ha-286000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/hyperkit.pid
	I0816 10:23:50.588008    4656 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid 4408 missing from process table
	I0816 10:23:50.588025    4656 main.go:141] libmachine: (ha-286000-m02) DBG | pid 4408 is in state "Stopped"
	I0816 10:23:50.588043    4656 main.go:141] libmachine: (ha-286000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/hyperkit.pid...
	I0816 10:23:50.588412    4656 main.go:141] libmachine: (ha-286000-m02) DBG | Using UUID f7630301-2bc3-4935-a751-ac72999da031
	I0816 10:23:50.615912    4656 main.go:141] libmachine: (ha-286000-m02) DBG | Generated MAC 72:69:8f:11:68:1d
	I0816 10:23:50.615934    4656 main.go:141] libmachine: (ha-286000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000
	I0816 10:23:50.616061    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f7630301-2bc3-4935-a751-ac72999da031", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aca20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:23:50.616091    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f7630301-2bc3-4935-a751-ac72999da031", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aca20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:23:50.616153    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "f7630301-2bc3-4935-a751-ac72999da031", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/ha-286000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"}
	I0816 10:23:50.616186    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U f7630301-2bc3-4935-a751-ac72999da031 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/ha-286000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"
	I0816 10:23:50.616197    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 10:23:50.617617    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 DEBUG: hyperkit: Pid is 4678
	I0816 10:23:50.618129    4656 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 0
	I0816 10:23:50.618145    4656 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:23:50.618226    4656 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 4678
	I0816 10:23:50.620253    4656 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:23:50.620318    4656 main.go:141] libmachine: (ha-286000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0816 10:23:50.620334    4656 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:23:50.620349    4656 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dbd7}
	I0816 10:23:50.620388    4656 main.go:141] libmachine: (ha-286000-m02) DBG | Found match: 72:69:8f:11:68:1d
	I0816 10:23:50.620402    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetConfigRaw
	I0816 10:23:50.620404    4656 main.go:141] libmachine: (ha-286000-m02) DBG | IP: 192.169.0.6
	I0816 10:23:50.621061    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:23:50.621271    4656 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:23:50.621639    4656 machine.go:93] provisionDockerMachine start ...
	I0816 10:23:50.621648    4656 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:23:50.621787    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:23:50.621898    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:23:50.622018    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:23:50.622130    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:23:50.622215    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:23:50.622373    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:23:50.622508    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:23:50.622515    4656 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 10:23:50.625610    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 10:23:50.635240    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 10:23:50.636222    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:23:50.636239    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:23:50.636256    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:23:50.636268    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:23:51.016978    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:51 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 10:23:51.016996    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:51 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 10:23:51.131867    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:23:51.131882    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:23:51.131905    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:23:51.131915    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:23:51.132722    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:51 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 10:23:51.132732    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:51 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 10:23:56.691144    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:56 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 10:23:56.691211    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:56 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 10:23:56.691221    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:56 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 10:23:56.715157    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:56 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 10:24:01.691628    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 10:24:01.691659    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:24:01.691824    4656 buildroot.go:166] provisioning hostname "ha-286000-m02"
	I0816 10:24:01.691835    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:24:01.691933    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:24:01.692024    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:24:01.692118    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:01.692216    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:01.692322    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:24:01.692468    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:01.692634    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:24:01.692662    4656 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-286000-m02 && echo "ha-286000-m02" | sudo tee /etc/hostname
	I0816 10:24:01.771215    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-286000-m02
	
	I0816 10:24:01.771228    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:24:01.771358    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:24:01.771450    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:01.771545    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:01.771647    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:24:01.771778    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:01.771942    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:24:01.771954    4656 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-286000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-286000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-286000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 10:24:01.843105    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
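	The shell snippet minikube just ran is its idempotent /etc/hosts edit: if no line already ends with the new hostname, it rewrites an existing 127.0.1.1 entry in place, otherwise appends one. A rough Go sketch of the same logic, where ensureHostsEntry is a hypothetical helper, not minikube's code:
	
	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	// ensureHostsEntry maps hostname to 127.0.1.1 in the hosts file, mirroring
	// the grep/sed/tee sequence above: skip if already mapped, rewrite an
	// existing 127.0.1.1 line if present, otherwise append a new entry.
	func ensureHostsEntry(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		lines := strings.Split(string(data), "\n")
		for _, l := range lines {
			f := strings.Fields(l)
			if len(f) >= 2 && f[len(f)-1] == hostname {
				return nil // already mapped, nothing to do
			}
		}
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + hostname
				return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
			}
		}
		lines = append(lines, "127.0.1.1 "+hostname)
		return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
	}
	
	func main() {
		fmt.Println(ensureHostsEntry("/etc/hosts", "ha-286000-m02"))
	}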
	I0816 10:24:01.843122    4656 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19461-1276/.minikube CaCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19461-1276/.minikube}
	I0816 10:24:01.843132    4656 buildroot.go:174] setting up certificates
	I0816 10:24:01.843138    4656 provision.go:84] configureAuth start
	I0816 10:24:01.843144    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:24:01.843278    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:24:01.843379    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:24:01.843473    4656 provision.go:143] copyHostCerts
	I0816 10:24:01.843506    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:24:01.843559    4656 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem, removing ...
	I0816 10:24:01.843565    4656 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:24:01.843699    4656 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem (1078 bytes)
	I0816 10:24:01.843904    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:24:01.843934    4656 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem, removing ...
	I0816 10:24:01.843938    4656 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:24:01.844006    4656 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem (1123 bytes)
	I0816 10:24:01.844155    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:24:01.844183    4656 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem, removing ...
	I0816 10:24:01.844188    4656 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:24:01.844260    4656 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem (1679 bytes)
	I0816 10:24:01.844439    4656 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem org=jenkins.ha-286000-m02 san=[127.0.0.1 192.169.0.6 ha-286000-m02 localhost minikube]
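	provision.go:117 issues the machine's Docker server certificate with the SAN list shown above: loopback, the VM's IP, its hostname, plus localhost and minikube. A compilable crypto/x509 sketch of issuing a certificate with that SAN list; it self-signs for brevity, whereas minikube signs with the profile's ca.pem/ca-key.pem, and the three-year lifetime is an assumption, not taken from the log:
	
	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)
	
	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-286000-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0), // assumed lifetime
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs as logged: san=[127.0.0.1 192.169.0.6 ha-286000-m02 localhost minikube]
			DNSNames:    []string{"ha-286000-m02", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.6")},
		}
		// Self-signed for brevity; the real flow signs with the CA key instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		fmt.Println("server cert DER bytes:", len(der))
	}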
	I0816 10:24:02.337393    4656 provision.go:177] copyRemoteCerts
	I0816 10:24:02.337441    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 10:24:02.337455    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:24:02.337604    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:24:02.337706    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:02.337804    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:24:02.337897    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:24:02.378639    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 10:24:02.378714    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 10:24:02.398417    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 10:24:02.398480    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 10:24:02.418213    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 10:24:02.418277    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 10:24:02.438096    4656 provision.go:87] duration metric: took 595.044673ms to configureAuth
	I0816 10:24:02.438110    4656 buildroot.go:189] setting minikube options for container-runtime
	I0816 10:24:02.438277    4656 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:24:02.438294    4656 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:24:02.438430    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:24:02.438542    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:24:02.438634    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:02.438711    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:02.438803    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:24:02.438923    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:02.439049    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:24:02.439057    4656 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 10:24:02.506619    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 10:24:02.506630    4656 buildroot.go:70] root file system type: tmpfs
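	The df probe above tells minikube that the guest's root filesystem is tmpfs (a buildroot live image), which determines how the docker unit is laid down next. The same fact can be established without shelling out; a Linux-only sketch using golang.org/x/sys/unix, where rootIsTmpfs is a hypothetical helper:
	
	package main
	
	import (
		"fmt"
	
		"golang.org/x/sys/unix"
	)
	
	// rootIsTmpfs reports whether / is tmpfs-backed, the same fact the
	// `df --output=fstype / | tail -n 1` probe above establishes.
	func rootIsTmpfs() (bool, error) {
		var st unix.Statfs_t
		if err := unix.Statfs("/", &st); err != nil {
			return false, err
		}
		return st.Type == unix.TMPFS_MAGIC, nil
	}
	
	func main() {
		ok, err := rootIsTmpfs()
		fmt.Println("root is tmpfs:", ok, err)
	}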
	I0816 10:24:02.506699    4656 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 10:24:02.506717    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:24:02.506855    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:24:02.506952    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:02.507065    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:02.507163    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:24:02.507316    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:02.507497    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:24:02.507542    4656 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 10:24:02.585569    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 10:24:02.585592    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:24:02.585731    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:24:02.585811    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:02.585904    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:02.585995    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:24:02.586114    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:02.586256    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:24:02.586268    4656 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 10:24:04.282251    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0816 10:24:04.282267    4656 machine.go:96] duration metric: took 13.663433605s to provisionDockerMachine
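	The `diff -u ... || { mv ...; systemctl ... }` one-liner above is the idempotent-update idiom used throughout this provisioning pass: only swap the rendered docker.service into place and restart the daemon when its content actually differs (here diff fails because no unit exists yet, so the unit is installed and docker is enabled and restarted). A Go sketch of the same idiom, with installIfChanged as a hypothetical helper:
	
	package main
	
	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)
	
	// installIfChanged replaces the installed unit with the candidate and
	// bounces the daemon only when the rendered content actually differs.
	func installIfChanged(current, candidate string) error {
		oldData, _ := os.ReadFile(current) // a missing unit reads as empty and forces the install path
		newData, err := os.ReadFile(candidate)
		if err != nil {
			return err
		}
		if bytes.Equal(oldData, newData) {
			return nil // unchanged: skip daemon-reload/enable/restart entirely
		}
		if err := os.Rename(candidate, current); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"systemctl", "daemon-reload"},
			{"systemctl", "enable", "docker"},
			{"systemctl", "restart", "docker"},
		} {
			if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
				return fmt.Errorf("%v failed: %v: %s", args, err, out)
			}
		}
		return nil
	}
	
	func main() {
		err := installIfChanged("/lib/systemd/system/docker.service",
			"/lib/systemd/system/docker.service.new")
		fmt.Println(err)
	}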
	I0816 10:24:04.282274    4656 start.go:293] postStartSetup for "ha-286000-m02" (driver="hyperkit")
	I0816 10:24:04.282282    4656 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 10:24:04.282291    4656 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:24:04.282476    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 10:24:04.282490    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:24:04.282590    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:24:04.282676    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:04.282759    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:24:04.282862    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:24:04.323177    4656 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 10:24:04.326227    4656 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 10:24:04.326238    4656 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/addons for local assets ...
	I0816 10:24:04.326327    4656 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/files for local assets ...
	I0816 10:24:04.326475    4656 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> 18312.pem in /etc/ssl/certs
	I0816 10:24:04.326481    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /etc/ssl/certs/18312.pem
	I0816 10:24:04.326635    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 10:24:04.333923    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:24:04.354007    4656 start.go:296] duration metric: took 71.735624ms for postStartSetup
	I0816 10:24:04.354029    4656 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:24:04.354205    4656 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0816 10:24:04.354219    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:24:04.354303    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:24:04.354400    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:04.354484    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:24:04.354570    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:24:04.394664    4656 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0816 10:24:04.394719    4656 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0816 10:24:04.426272    4656 fix.go:56] duration metric: took 13.919762029s for fixHost
	I0816 10:24:04.426298    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:24:04.426444    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:24:04.426552    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:04.426653    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:04.426754    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:24:04.426882    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:04.427028    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:24:04.427036    4656 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 10:24:04.493811    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723829044.518955224
	
	I0816 10:24:04.493822    4656 fix.go:216] guest clock: 1723829044.518955224
	I0816 10:24:04.493832    4656 fix.go:229] Guest: 2024-08-16 10:24:04.518955224 -0700 PDT Remote: 2024-08-16 10:24:04.426286 -0700 PDT m=+33.045019463 (delta=92.669224ms)
	I0816 10:24:04.493843    4656 fix.go:200] guest clock delta is within tolerance: 92.669224ms
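	fix.go:216-229 measures guest-clock skew: it parses the guest's `date +%s.%N` output and subtracts the host's wall-clock reading taken when the command returned, accepting the 92.669224ms delta as within tolerance. A sketch of that arithmetic, where guestClockDelta is a hypothetical helper and the sample values are the two timestamps from the log:
	
	package main
	
	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)
	
	// guestClockDelta parses `date +%s.%N` output from the guest and returns
	// how far it sits ahead of (positive) or behind (negative) hostNow.
	func guestClockDelta(guestOutput string, hostNow time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(hostNow), nil
	}
	
	func main() {
		// The two readings from the log: guest 1723829044.518955224,
		// host 2024-08-16 10:24:04.426286 -0700 PDT (17:24:04.426286 UTC).
		host := time.Date(2024, time.August, 16, 17, 24, 4, 426286000, time.UTC)
		delta, err := guestClockDelta("1723829044.518955224", host)
		fmt.Println(delta, err) // ~92.669ms, matching the logged delta to float64 precision
	}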
	I0816 10:24:04.493847    4656 start.go:83] releasing machines lock for "ha-286000-m02", held for 13.987372778s
	I0816 10:24:04.493864    4656 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:24:04.494002    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:24:04.518312    4656 out.go:177] * Found network options:
	I0816 10:24:04.540563    4656 out.go:177]   - NO_PROXY=192.169.0.5
	W0816 10:24:04.562476    4656 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:24:04.562514    4656 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:24:04.563369    4656 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:24:04.563631    4656 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:24:04.563760    4656 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 10:24:04.563821    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	W0816 10:24:04.563878    4656 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:24:04.563978    4656 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0816 10:24:04.563994    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:24:04.563998    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:24:04.564194    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:04.564230    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:24:04.564370    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:24:04.564412    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:04.564603    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:24:04.564677    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:24:04.564735    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	W0816 10:24:04.601353    4656 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 10:24:04.601410    4656 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 10:24:04.653940    4656 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 10:24:04.653960    4656 start.go:495] detecting cgroup driver to use...
	I0816 10:24:04.654084    4656 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:24:04.669702    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 10:24:04.678676    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 10:24:04.687652    4656 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 10:24:04.687695    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 10:24:04.696611    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:24:04.705567    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 10:24:04.714412    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:24:04.723256    4656 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 10:24:04.732202    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 10:24:04.746674    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 10:24:04.757904    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 10:24:04.767905    4656 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 10:24:04.779013    4656 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 10:24:04.790474    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:24:04.892919    4656 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0816 10:24:04.911874    4656 start.go:495] detecting cgroup driver to use...
	I0816 10:24:04.911946    4656 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 10:24:04.929416    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:24:04.941191    4656 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 10:24:04.954835    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:24:04.965605    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:24:04.976040    4656 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 10:24:05.001090    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:24:05.011999    4656 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:24:05.026893    4656 ssh_runner.go:195] Run: which cri-dockerd
	I0816 10:24:05.029920    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 10:24:05.037094    4656 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 10:24:05.050742    4656 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 10:24:05.142175    4656 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 10:24:05.247816    4656 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 10:24:05.247843    4656 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 10:24:05.261875    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:24:05.354182    4656 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:24:07.691138    4656 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.337231062s)
	I0816 10:24:07.691198    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0816 10:24:07.701875    4656 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0816 10:24:07.715113    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:24:07.725351    4656 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0816 10:24:07.820462    4656 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0816 10:24:07.932462    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:24:08.044265    4656 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0816 10:24:08.057914    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:24:08.069171    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:24:08.165855    4656 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0816 10:24:08.229743    4656 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0816 10:24:08.229822    4656 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0816 10:24:08.234625    4656 start.go:563] Will wait 60s for crictl version
	I0816 10:24:08.234677    4656 ssh_runner.go:195] Run: which crictl
	I0816 10:24:08.237852    4656 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 10:24:08.262491    4656 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0816 10:24:08.262569    4656 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:24:08.282005    4656 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:24:08.324107    4656 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0816 10:24:08.365750    4656 out.go:177]   - env NO_PROXY=192.169.0.5
	I0816 10:24:08.386602    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:24:08.387035    4656 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0816 10:24:08.391617    4656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:24:08.401981    4656 mustload.go:65] Loading cluster: ha-286000
	I0816 10:24:08.402159    4656 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:24:08.402381    4656 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:24:08.402414    4656 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:24:08.411266    4656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52207
	I0816 10:24:08.411600    4656 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:24:08.411912    4656 main.go:141] libmachine: Using API Version  1
	I0816 10:24:08.411923    4656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:24:08.412158    4656 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:24:08.412273    4656 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:24:08.412350    4656 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:24:08.412439    4656 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 4669
	I0816 10:24:08.413371    4656 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:24:08.413648    4656 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:24:08.413671    4656 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:24:08.422352    4656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52209
	I0816 10:24:08.422710    4656 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:24:08.423035    4656 main.go:141] libmachine: Using API Version  1
	I0816 10:24:08.423046    4656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:24:08.423253    4656 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:24:08.423365    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:24:08.423454    4656 certs.go:68] Setting up /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000 for IP: 192.169.0.6
	I0816 10:24:08.423460    4656 certs.go:194] generating shared ca certs ...
	I0816 10:24:08.423469    4656 certs.go:226] acquiring lock for ca certs: {Name:mka549fbc467849834d2267da204028c49d88a3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:24:08.423616    4656 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key
	I0816 10:24:08.423685    4656 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key
	I0816 10:24:08.423693    4656 certs.go:256] generating profile certs ...
	I0816 10:24:08.423785    4656 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key
	I0816 10:24:08.423872    4656 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.df014ba6
	I0816 10:24:08.423924    4656 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key
	I0816 10:24:08.423931    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 10:24:08.423952    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 10:24:08.423978    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 10:24:08.423996    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 10:24:08.424013    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 10:24:08.424031    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 10:24:08.424049    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 10:24:08.424065    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 10:24:08.424139    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem (1338 bytes)
	W0816 10:24:08.424181    4656 certs.go:480] ignoring /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831_empty.pem, impossibly tiny 0 bytes
	I0816 10:24:08.424189    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 10:24:08.424243    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem (1078 bytes)
	I0816 10:24:08.424278    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem (1123 bytes)
	I0816 10:24:08.424308    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem (1679 bytes)
	I0816 10:24:08.424377    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:24:08.424414    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem -> /usr/share/ca-certificates/1831.pem
	I0816 10:24:08.424439    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /usr/share/ca-certificates/18312.pem
	I0816 10:24:08.424464    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:24:08.424490    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:24:08.424585    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:24:08.424670    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:24:08.424754    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:24:08.424829    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:24:08.455631    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0816 10:24:08.459165    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0816 10:24:08.467170    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0816 10:24:08.470222    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0816 10:24:08.478239    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0816 10:24:08.481358    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0816 10:24:08.489236    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0816 10:24:08.492402    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0816 10:24:08.500317    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0816 10:24:08.503508    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0816 10:24:08.511673    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0816 10:24:08.514769    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0816 10:24:08.522766    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 10:24:08.542887    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 10:24:08.562071    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 10:24:08.581743    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 10:24:08.600945    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0816 10:24:08.620933    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 10:24:08.640254    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 10:24:08.659444    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 10:24:08.678715    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem --> /usr/share/ca-certificates/1831.pem (1338 bytes)
	I0816 10:24:08.697527    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /usr/share/ca-certificates/18312.pem (1708 bytes)
	I0816 10:24:08.716988    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 10:24:08.735913    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0816 10:24:08.749507    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0816 10:24:08.763125    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0816 10:24:08.776902    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0816 10:24:08.790611    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0816 10:24:08.804538    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0816 10:24:08.817970    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0816 10:24:08.831472    4656 ssh_runner.go:195] Run: openssl version
	I0816 10:24:08.835773    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18312.pem && ln -fs /usr/share/ca-certificates/18312.pem /etc/ssl/certs/18312.pem"
	I0816 10:24:08.845139    4656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18312.pem
	I0816 10:24:08.848508    4656 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 16:57 /usr/share/ca-certificates/18312.pem
	I0816 10:24:08.848545    4656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18312.pem
	I0816 10:24:08.852837    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18312.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 10:24:08.861881    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 10:24:08.870959    4656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:24:08.874362    4656 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:48 /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:24:08.874393    4656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:24:08.878676    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 10:24:08.887721    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1831.pem && ln -fs /usr/share/ca-certificates/1831.pem /etc/ssl/certs/1831.pem"
	I0816 10:24:08.896767    4656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1831.pem
	I0816 10:24:08.900184    4656 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 16:57 /usr/share/ca-certificates/1831.pem
	I0816 10:24:08.900218    4656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1831.pem
	I0816 10:24:08.904590    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1831.pem /etc/ssl/certs/51391683.0"
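
Editor's note: the repeated "test -s ... && ln -fs" / "openssl x509 -hash" / "test -L ... || ln -fs" round trips install each CA into the OpenSSL trust directory. The subject hash names the symlink (b5213941.0 for minikubeCA.pem above), which is how OpenSSL locates a trusted certificate at verification time. A minimal Go sketch of the same pattern, shelling out to openssl as the log does:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCA mirrors the log's pattern: compute the subject hash of
    // a PEM certificate and symlink /etc/ssl/certs/<hash>.0 to it.
    func installCA(pem string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        os.Remove(link) // ln -fs: replace any existing link
        return os.Symlink(pem, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
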
	I0816 10:24:08.913817    4656 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 10:24:08.917320    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 10:24:08.921592    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 10:24:08.925840    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 10:24:08.930232    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 10:24:08.934401    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 10:24:08.938749    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
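
Editor's note: the six "-checkend 86400" probes gate certificate reuse: openssl exits non-zero if the certificate expires within the next 86400 seconds (24 hours), in which case minikube regenerates it instead of reusing it. The same check, sketched in Go:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // validFor24h returns true when the certificate will still be valid
    // 86400 seconds from now, matching "openssl x509 -checkend 86400".
    func validFor24h(certPath string) bool {
        return exec.Command("openssl", "x509", "-noout",
            "-in", certPath, "-checkend", "86400").Run() == nil
    }

    func main() {
        for _, c := range []string{
            "/var/lib/minikube/certs/apiserver-kubelet-client.crt",
            "/var/lib/minikube/certs/etcd/server.crt",
        } {
            fmt.Printf("%s valid for 24h: %v\n", c, validFor24h(c))
        }
    }
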
	I0816 10:24:08.943061    4656 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.0 docker true true} ...
	I0816 10:24:08.943117    4656 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-286000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
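
Editor's note: the kubelet unit above pins the joining node's identity — --hostname-override=ha-286000-m02 and --node-ip=192.169.0.6 make the second control-plane member register under its own name and address, while the bootstrap kubeconfig lets it obtain client credentials. A sketch of rendering such a drop-in with text/template (illustrative only; minikube uses its own templates and field names):

    package main

    import (
        "os"
        "text/template"
    )

    // The unit body mirrors what kubeadm.go:946 printed above; the
    // template fields here are assumptions for the sketch.
    const unit = `[Unit]
    Wants=docker.socket

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubeVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(unit))
        t.Execute(os.Stdout, map[string]string{
            "KubeVersion": "v1.31.0",
            "NodeName":    "ha-286000-m02",
            "NodeIP":      "192.169.0.6",
        })
    }
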
	I0816 10:24:08.943138    4656 kube-vip.go:115] generating kube-vip config ...
	I0816 10:24:08.943173    4656 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 10:24:08.956099    4656 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 10:24:08.956137    4656 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
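
Editor's note: this static pod runs kube-vip on every control-plane node. vip_leaderelection with lease plndr-cp-lock elects a single holder of the VIP 192.169.0.254 (answered via ARP on eth0), and minikube auto-enables lb_enable/lb_port (kube-vip.go:167 above) so apiserver traffic on 8443 is load-balanced across members. A condensed sketch of assembling that env block (the helper is hypothetical, values taken from the manifest above):

    package main

    import "fmt"

    type env struct{ name, value string }

    // kubeVIPEnv assembles the core settings from the manifest; the
    // lb_* pair is appended only when load-balancing is auto-enabled.
    func kubeVIPEnv(vip string, lbEnable bool) []env {
        e := []env{
            {"vip_arp", "true"},
            {"port", "8443"},
            {"cp_enable", "true"},
            {"vip_leaderelection", "true"},
            {"vip_leasename", "plndr-cp-lock"},
            {"address", vip},
        }
        if lbEnable { // "auto-enabling control-plane load-balancing"
            e = append(e, env{"lb_enable", "true"}, env{"lb_port", "8443"})
        }
        return e
    }

    func main() {
        for _, e := range kubeVIPEnv("192.169.0.254", true) {
            fmt.Printf("%s=%s\n", e.name, e.value)
        }
    }
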
	I0816 10:24:08.956187    4656 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 10:24:08.964732    4656 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 10:24:08.964780    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0816 10:24:08.972962    4656 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0816 10:24:08.986351    4656 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 10:24:08.999555    4656 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0816 10:24:09.013514    4656 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0816 10:24:09.016494    4656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
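
Editor's note: the one-liner above is a sed-free /etc/hosts rewrite: grep -v drops any stale line ending in a tab plus control-plane.minikube.internal, echo appends the current VIP mapping, the result is staged under /tmp/h.$$ (the shell PID keeps the name unique), and sudo cp overwrites the target in place so its ownership and attributes survive. An equivalent sketch in Go:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // updateHosts drops any stale "<ip>\t<host>" line and appends the
    // current mapping, mirroring the shell pipeline in the log.
    func updateHosts(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var keep []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+host) {
                keep = append(keep, line)
            }
        }
        keep = append(keep, fmt.Sprintf("%s\t%s", ip, host))
        return os.WriteFile(path, []byte(strings.Join(keep, "\n")+"\n"), 0644)
    }

    func main() {
        if err := updateHosts("/etc/hosts", "192.169.0.254", "control-plane.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
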
	I0816 10:24:09.026607    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:24:09.119324    4656 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:24:09.134140    4656 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:24:09.134339    4656 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:24:09.155614    4656 out.go:177] * Verifying Kubernetes components...
	I0816 10:24:09.197468    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:24:09.303306    4656 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:24:09.318292    4656 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:24:09.318481    4656 kapi.go:59] client config for ha-286000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key", CAFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x5540f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0816 10:24:09.318519    4656 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0816 10:24:09.318689    4656 node_ready.go:35] waiting up to 6m0s for node "ha-286000-m02" to be "Ready" ...
	I0816 10:24:09.318767    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:24:09.318772    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:09.318780    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:09.318783    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.478519    4656 round_trippers.go:574] Response Status: 200 OK in 9160 milliseconds
	I0816 10:24:18.479788    4656 node_ready.go:49] node "ha-286000-m02" has status "Ready":"True"
	I0816 10:24:18.479801    4656 node_ready.go:38] duration metric: took 9.161930596s for node "ha-286000-m02" to be "Ready" ...
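
Editor's note: the 9.16 s first response reflects an apiserver still coming up; once it answers, the wait loop simply re-fetches the Node object until the Ready condition is True. A sketch of the same wait with client-go (kubeconfig path taken from the log; the 6-minute timeout matches node_ready.go:35):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19461-1276/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll until the node reports Ready, as node_ready.go does above.
        err = wait.PollUntilContextTimeout(context.Background(),
            2*time.Second, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                n, err := cs.CoreV1().Nodes().Get(ctx, "ha-286000-m02", metav1.GetOptions{})
                if err != nil {
                    return false, nil // keep retrying on transient errors
                }
                for _, c := range n.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return true, nil
                    }
                }
                return false, nil
            })
        fmt.Println("ready:", err == nil)
    }
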
	I0816 10:24:18.479809    4656 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 10:24:18.479841    4656 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0816 10:24:18.479849    4656 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0816 10:24:18.479888    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:24:18.479893    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.479899    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.479903    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.524673    4656 round_trippers.go:574] Response Status: 200 OK in 44 milliseconds
	I0816 10:24:18.529733    4656 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-2kqjf" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:18.529785    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-2kqjf
	I0816 10:24:18.529790    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.529807    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.529813    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.533009    4656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:24:18.533408    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:24:18.533415    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.533421    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.533425    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.536536    4656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:24:18.536873    4656 pod_ready.go:93] pod "coredns-6f6b679f8f-2kqjf" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:18.536881    4656 pod_ready.go:82] duration metric: took 7.13625ms for pod "coredns-6f6b679f8f-2kqjf" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:18.536890    4656 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-rfbz7" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:18.536923    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-rfbz7
	I0816 10:24:18.536928    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.536933    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.536936    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.538881    4656 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:24:18.539268    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:24:18.539275    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.539280    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.539283    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.541207    4656 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:24:18.541586    4656 pod_ready.go:93] pod "coredns-6f6b679f8f-rfbz7" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:18.541594    4656 pod_ready.go:82] duration metric: took 4.698747ms for pod "coredns-6f6b679f8f-rfbz7" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:18.541600    4656 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:18.541631    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-286000
	I0816 10:24:18.541636    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.541641    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.541646    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.543814    4656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:24:18.544226    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:24:18.544232    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.544238    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.544241    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.546294    4656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:24:18.546667    4656 pod_ready.go:93] pod "etcd-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:18.546676    4656 pod_ready.go:82] duration metric: took 5.071416ms for pod "etcd-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:18.546683    4656 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:18.546714    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-286000-m02
	I0816 10:24:18.546719    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.546724    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.546727    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.548810    4656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:24:18.549180    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:24:18.549187    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.549193    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.549196    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.551164    4656 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:24:18.551594    4656 pod_ready.go:93] pod "etcd-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:18.551602    4656 pod_ready.go:82] duration metric: took 4.914791ms for pod "etcd-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:18.551612    4656 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:18.551646    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000
	I0816 10:24:18.551651    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.551657    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.551661    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.553736    4656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:24:18.680501    4656 request.go:632] Waited for 126.254478ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:24:18.680609    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:24:18.680620    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.680631    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.680639    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.684350    4656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:24:18.684850    4656 pod_ready.go:93] pod "kube-apiserver-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:18.684859    4656 pod_ready.go:82] duration metric: took 133.250923ms for pod "kube-apiserver-ha-286000" in "kube-system" namespace to be "Ready" ...
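
Editor's note: the "Waited ... due to client-side throttling, not priority and fairness" lines come from client-go's token-bucket limiter. The rest.Config dumped earlier has QPS:0 and Burst:0, so client-go's documented defaults (5 QPS, burst 10) apply, and this polling loop saturates them. A sketch of raising the limits when a caller legitimately needs chattier polling:

    package main

    import (
        "fmt"

        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/flowcontrol"
    )

    // withHigherLimits swaps in a bigger token bucket; an explicit
    // RateLimiter takes precedence over the QPS/Burst fields.
    func withHigherLimits(cfg *rest.Config) *rest.Config {
        cfg.RateLimiter = flowcontrol.NewTokenBucketRateLimiter(50, 100)
        return cfg
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19461-1276/kubeconfig")
        if err != nil {
            panic(err)
        }
        cfg = withHigherLimits(cfg)
        fmt.Printf("limiter installed: %T\n", cfg.RateLimiter)
    }
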
	I0816 10:24:18.684865    4656 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:18.880626    4656 request.go:632] Waited for 195.713304ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000-m02
	I0816 10:24:18.880742    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000-m02
	I0816 10:24:18.880753    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.880765    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.880778    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.884447    4656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:24:19.081261    4656 request.go:632] Waited for 196.182218ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:24:19.081349    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:24:19.081358    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:19.081368    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:19.081377    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:19.085528    4656 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 10:24:19.085961    4656 pod_ready.go:93] pod "kube-apiserver-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:19.085970    4656 pod_ready.go:82] duration metric: took 401.129633ms for pod "kube-apiserver-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:19.085977    4656 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:19.279954    4656 request.go:632] Waited for 193.926578ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000
	I0816 10:24:19.279986    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000
	I0816 10:24:19.279991    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:19.279997    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:19.280003    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:19.283105    4656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:24:19.480663    4656 request.go:632] Waited for 196.83909ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:24:19.480698    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:24:19.480704    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:19.480710    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:19.480728    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:19.483828    4656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:24:19.484258    4656 pod_ready.go:93] pod "kube-controller-manager-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:19.484269    4656 pod_ready.go:82] duration metric: took 398.316107ms for pod "kube-controller-manager-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:19.484276    4656 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:19.681917    4656 request.go:632] Waited for 197.597037ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000-m02
	I0816 10:24:19.682075    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000-m02
	I0816 10:24:19.682091    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:19.682103    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:19.682113    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:19.686127    4656 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 10:24:19.880667    4656 request.go:632] Waited for 193.865313ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:24:19.880730    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:24:19.880736    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:19.880742    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:19.880750    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:19.884780    4656 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 10:24:19.885298    4656 pod_ready.go:93] pod "kube-controller-manager-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:19.885308    4656 pod_ready.go:82] duration metric: took 401.055356ms for pod "kube-controller-manager-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:19.885315    4656 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5qhgk" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:20.081205    4656 request.go:632] Waited for 195.805147ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5qhgk
	I0816 10:24:20.081294    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5qhgk
	I0816 10:24:20.081304    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:20.081316    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:20.081321    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:20.085631    4656 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 10:24:20.280455    4656 request.go:632] Waited for 194.474574ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m04
	I0816 10:24:20.280532    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m04
	I0816 10:24:20.280539    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:20.280547    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:20.280552    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:20.287097    4656 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0816 10:24:20.287492    4656 pod_ready.go:93] pod "kube-proxy-5qhgk" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:20.287501    4656 pod_ready.go:82] duration metric: took 402.209883ms for pod "kube-proxy-5qhgk" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:20.287508    4656 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pt669" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:20.480572    4656 request.go:632] Waited for 193.037822ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pt669
	I0816 10:24:20.480648    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pt669
	I0816 10:24:20.480652    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:20.480659    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:20.480663    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:20.483171    4656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:24:20.681664    4656 request.go:632] Waited for 198.111953ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:24:20.681763    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:24:20.681771    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:20.681779    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:20.681784    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:20.684372    4656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:24:20.684693    4656 pod_ready.go:93] pod "kube-proxy-pt669" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:20.684702    4656 pod_ready.go:82] duration metric: took 397.216841ms for pod "kube-proxy-pt669" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:20.684712    4656 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-w4nt2" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:20.879782    4656 request.go:632] Waited for 195.039009ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w4nt2
	I0816 10:24:20.879907    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w4nt2
	I0816 10:24:20.879921    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:20.879933    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:20.879941    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:20.883394    4656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:24:21.079930    4656 request.go:632] Waited for 195.888686ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:24:21.080028    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:24:21.080039    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:21.080050    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:21.080059    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:21.083488    4656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:24:21.083893    4656 pod_ready.go:93] pod "kube-proxy-w4nt2" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:21.083903    4656 pod_ready.go:82] duration metric: took 399.212461ms for pod "kube-proxy-w4nt2" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:21.083911    4656 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:21.281558    4656 request.go:632] Waited for 197.607208ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000
	I0816 10:24:21.281628    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000
	I0816 10:24:21.281639    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:21.281648    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:21.281654    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:21.284223    4656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:24:21.480419    4656 request.go:632] Waited for 195.838756ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:24:21.480514    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:24:21.480525    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:21.480537    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:21.480544    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:21.483887    4656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:24:21.484430    4656 pod_ready.go:93] pod "kube-scheduler-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:21.484439    4656 pod_ready.go:82] duration metric: took 400.549346ms for pod "kube-scheduler-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:21.484446    4656 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:21.679727    4656 request.go:632] Waited for 195.252345ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000-m02
	I0816 10:24:21.679760    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000-m02
	I0816 10:24:21.679765    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:21.679769    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:21.679805    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:21.686476    4656 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0816 10:24:21.880162    4656 request.go:632] Waited for 193.203193ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:24:21.880221    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:24:21.880231    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:21.880247    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:21.880256    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:21.884015    4656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:24:21.884602    4656 pod_ready.go:93] pod "kube-scheduler-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:21.884611    4656 pod_ready.go:82] duration metric: took 400.186514ms for pod "kube-scheduler-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:21.884619    4656 pod_ready.go:39] duration metric: took 3.405043457s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 10:24:21.884636    4656 api_server.go:52] waiting for apiserver process to appear ...
	I0816 10:24:21.884692    4656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:24:21.896175    4656 api_server.go:72] duration metric: took 12.763101701s to wait for apiserver process to appear ...
	I0816 10:24:21.896187    4656 api_server.go:88] waiting for apiserver healthz status ...
	I0816 10:24:21.896203    4656 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0816 10:24:21.900677    4656 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0816 10:24:21.900711    4656 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0816 10:24:21.900715    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:21.900720    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:21.900725    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:21.901496    4656 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0816 10:24:21.901599    4656 api_server.go:141] control plane version: v1.31.0
	I0816 10:24:21.901609    4656 api_server.go:131] duration metric: took 5.41777ms to wait for apiserver health ...
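
Editor's note: health gating is two GETs — /healthz must return 200 with body "ok", then /version yields the control-plane version (v1.31.0 here). A sketch of both calls with client-go's discovery client:

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19461-1276/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // GET /healthz - the probe behind api_server.go:253 above.
        body, err := cs.Discovery().RESTClient().Get().
            AbsPath("/healthz").Do(context.Background()).Raw()
        fmt.Printf("healthz: %s err: %v\n", body, err) // "ok" when healthy
        // GET /version - the control-plane version check that follows.
        if v, err := cs.Discovery().ServerVersion(); err == nil {
            fmt.Println("control plane version:", v.GitVersion)
        }
    }
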
	I0816 10:24:21.901617    4656 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 10:24:22.081425    4656 request.go:632] Waited for 179.775499ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:24:22.081509    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:24:22.081521    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:22.081533    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:22.081542    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:22.087308    4656 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 10:24:22.090908    4656 system_pods.go:59] 19 kube-system pods found
	I0816 10:24:22.090924    4656 system_pods.go:61] "coredns-6f6b679f8f-2kqjf" [be18cd09-3e3b-4749-bf29-7001a879f593] Running
	I0816 10:24:22.090929    4656 system_pods.go:61] "coredns-6f6b679f8f-rfbz7" [a593e27a-e38b-46c7-a603-44963c31c095] Running
	I0816 10:24:22.090932    4656 system_pods.go:61] "etcd-ha-286000" [c45bc623-befe-4e88-8da1-bb4f7d7be5b2] Running
	I0816 10:24:22.090935    4656 system_pods.go:61] "etcd-ha-286000-m02" [e14fbe0c-4527-4cbb-8c13-01c0b32a8d15] Running
	I0816 10:24:22.090938    4656 system_pods.go:61] "kindnet-b9r6s" [07db3ed9-4355-45a2-be01-15273d773c65] Running
	I0816 10:24:22.090940    4656 system_pods.go:61] "kindnet-t9kjf" [20c57f28-5e86-44a4-8a2a-07e1e240abf2] Running
	I0816 10:24:22.090943    4656 system_pods.go:61] "kindnet-whqxb" [1cc5291b-52ea-44b5-b87c-607d46b5281a] Running
	I0816 10:24:22.090946    4656 system_pods.go:61] "kube-apiserver-ha-286000" [c0d298a9-931b-4084-9e5f-01a88de27456] Running
	I0816 10:24:22.090949    4656 system_pods.go:61] "kube-apiserver-ha-286000-m02" [3666c131-2b88-483a-ab8c-b18cb8db7d6f] Running
	I0816 10:24:22.090952    4656 system_pods.go:61] "kube-controller-manager-ha-286000" [5a4f4c5d-d89a-4e5f-b022-479ca31f64cd] Running
	I0816 10:24:22.090954    4656 system_pods.go:61] "kube-controller-manager-ha-286000-m02" [a347c3d4-fa47-49ea-93a0-83087017a2bd] Running
	I0816 10:24:22.090957    4656 system_pods.go:61] "kube-proxy-5qhgk" [da0cb383-9e0b-44cc-9f79-7861029514f7] Running
	I0816 10:24:22.090959    4656 system_pods.go:61] "kube-proxy-pt669" [798c956f-8f08-465a-b0d9-90b65f703f80] Running
	I0816 10:24:22.090962    4656 system_pods.go:61] "kube-proxy-w4nt2" [79ce2248-f8fd-4a3b-aeb7-f81d7ad64564] Running
	I0816 10:24:22.090967    4656 system_pods.go:61] "kube-scheduler-ha-286000" [b4af80e0-cbb0-4257-a212-3585faff0875] Running
	I0816 10:24:22.090971    4656 system_pods.go:61] "kube-scheduler-ha-286000-m02" [89920e93-c959-47ec-8a2c-f2b63e8c3d3a] Running
	I0816 10:24:22.090973    4656 system_pods.go:61] "kube-vip-ha-286000" [b0f85be2-2afb-44b0-a701-161b24529349] Running
	I0816 10:24:22.090976    4656 system_pods.go:61] "kube-vip-ha-286000-m02" [4982dc53-946d-4f75-8e9e-452ef39ff2fa] Running
	I0816 10:24:22.090978    4656 system_pods.go:61] "storage-provisioner" [4805d53b-2db3-4092-a3f2-d4a854e93adc] Running
	I0816 10:24:22.090983    4656 system_pods.go:74] duration metric: took 189.374292ms to wait for pod list to return data ...
	I0816 10:24:22.090989    4656 default_sa.go:34] waiting for default service account to be created ...
	I0816 10:24:22.280932    4656 request.go:632] Waited for 189.91131ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0816 10:24:22.280992    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0816 10:24:22.280998    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:22.281004    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:22.281007    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:22.286126    4656 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 10:24:22.286303    4656 default_sa.go:45] found service account: "default"
	I0816 10:24:22.286313    4656 default_sa.go:55] duration metric: took 195.332329ms for default service account to be created ...
	I0816 10:24:22.286320    4656 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 10:24:22.480087    4656 request.go:632] Waited for 193.706904ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:24:22.480149    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:24:22.480160    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:22.480172    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:22.480181    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:22.486391    4656 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0816 10:24:22.490416    4656 system_pods.go:86] 19 kube-system pods found
	I0816 10:24:22.490428    4656 system_pods.go:89] "coredns-6f6b679f8f-2kqjf" [be18cd09-3e3b-4749-bf29-7001a879f593] Running
	I0816 10:24:22.490432    4656 system_pods.go:89] "coredns-6f6b679f8f-rfbz7" [a593e27a-e38b-46c7-a603-44963c31c095] Running
	I0816 10:24:22.490435    4656 system_pods.go:89] "etcd-ha-286000" [c45bc623-befe-4e88-8da1-bb4f7d7be5b2] Running
	I0816 10:24:22.490443    4656 system_pods.go:89] "etcd-ha-286000-m02" [e14fbe0c-4527-4cbb-8c13-01c0b32a8d15] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 10:24:22.490447    4656 system_pods.go:89] "kindnet-b9r6s" [07db3ed9-4355-45a2-be01-15273d773c65] Running
	I0816 10:24:22.490454    4656 system_pods.go:89] "kindnet-t9kjf" [20c57f28-5e86-44a4-8a2a-07e1e240abf2] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0816 10:24:22.490458    4656 system_pods.go:89] "kindnet-whqxb" [1cc5291b-52ea-44b5-b87c-607d46b5281a] Running
	I0816 10:24:22.490462    4656 system_pods.go:89] "kube-apiserver-ha-286000" [c0d298a9-931b-4084-9e5f-01a88de27456] Running
	I0816 10:24:22.490466    4656 system_pods.go:89] "kube-apiserver-ha-286000-m02" [3666c131-2b88-483a-ab8c-b18cb8db7d6f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 10:24:22.490469    4656 system_pods.go:89] "kube-controller-manager-ha-286000" [5a4f4c5d-d89a-4e5f-b022-479ca31f64cd] Running
	I0816 10:24:22.490478    4656 system_pods.go:89] "kube-controller-manager-ha-286000-m02" [a347c3d4-fa47-49ea-93a0-83087017a2bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 10:24:22.490483    4656 system_pods.go:89] "kube-proxy-5qhgk" [da0cb383-9e0b-44cc-9f79-7861029514f7] Running
	I0816 10:24:22.490487    4656 system_pods.go:89] "kube-proxy-pt669" [798c956f-8f08-465a-b0d9-90b65f703f80] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 10:24:22.490496    4656 system_pods.go:89] "kube-proxy-w4nt2" [79ce2248-f8fd-4a3b-aeb7-f81d7ad64564] Running
	I0816 10:24:22.490499    4656 system_pods.go:89] "kube-scheduler-ha-286000" [b4af80e0-cbb0-4257-a212-3585faff0875] Running
	I0816 10:24:22.490503    4656 system_pods.go:89] "kube-scheduler-ha-286000-m02" [89920e93-c959-47ec-8a2c-f2b63e8c3d3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 10:24:22.490507    4656 system_pods.go:89] "kube-vip-ha-286000" [b0f85be2-2afb-44b0-a701-161b24529349] Running
	I0816 10:24:22.490511    4656 system_pods.go:89] "kube-vip-ha-286000-m02" [4982dc53-946d-4f75-8e9e-452ef39ff2fa] Running
	I0816 10:24:22.490514    4656 system_pods.go:89] "storage-provisioner" [4805d53b-2db3-4092-a3f2-d4a854e93adc] Running
	I0816 10:24:22.490518    4656 system_pods.go:126] duration metric: took 204.207739ms to wait for k8s-apps to be running ...
	I0816 10:24:22.490523    4656 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 10:24:22.490574    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:24:22.501971    4656 system_svc.go:56] duration metric: took 11.445041ms WaitForService to wait for kubelet
	I0816 10:24:22.501986    4656 kubeadm.go:582] duration metric: took 13.368953512s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:24:22.501997    4656 node_conditions.go:102] verifying NodePressure condition ...
	I0816 10:24:22.681633    4656 request.go:632] Waited for 179.608953ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0816 10:24:22.681696    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0816 10:24:22.681702    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:22.681708    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:22.681744    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:22.684771    4656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:24:22.685508    4656 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 10:24:22.685523    4656 node_conditions.go:123] node cpu capacity is 2
	I0816 10:24:22.685532    4656 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 10:24:22.685535    4656 node_conditions.go:123] node cpu capacity is 2
	I0816 10:24:22.685538    4656 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 10:24:22.685541    4656 node_conditions.go:123] node cpu capacity is 2
	I0816 10:24:22.685544    4656 node_conditions.go:105] duration metric: took 183.55481ms to run NodePressure ...
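
Editor's note: the NodePressure check reads each node's capacity (three nodes here, each reporting 17734596Ki ephemeral storage and 2 CPUs) to confirm no disk or memory pressure blocks scheduling. A sketch of listing the same figures:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19461-1276/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        // Print the capacity figures node_conditions.go logs above.
        for _, n := range nodes.Items {
            fmt.Printf("%s ephemeral=%s cpu=%s\n", n.Name,
                n.Status.Capacity.StorageEphemeral().String(),
                n.Status.Capacity.Cpu().String())
        }
    }
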
	I0816 10:24:22.685552    4656 start.go:241] waiting for startup goroutines ...
	I0816 10:24:22.685571    4656 start.go:255] writing updated cluster config ...
	I0816 10:24:22.707964    4656 out.go:201] 
	I0816 10:24:22.729754    4656 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:24:22.729889    4656 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:24:22.752182    4656 out.go:177] * Starting "ha-286000-m03" control-plane node in "ha-286000" cluster
	I0816 10:24:22.794355    4656 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:24:22.794388    4656 cache.go:56] Caching tarball of preloaded images
	I0816 10:24:22.794595    4656 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 10:24:22.794623    4656 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:24:22.794796    4656 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:24:22.870926    4656 start.go:360] acquireMachinesLock for ha-286000-m03: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:24:22.871061    4656 start.go:364] duration metric: took 106.312µs to acquireMachinesLock for "ha-286000-m03"
	I0816 10:24:22.871092    4656 start.go:96] Skipping create...Using existing machine configuration
	I0816 10:24:22.871102    4656 fix.go:54] fixHost starting: m03
	I0816 10:24:22.871530    4656 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:24:22.871567    4656 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:24:22.881793    4656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52214
	I0816 10:24:22.882176    4656 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:24:22.882559    4656 main.go:141] libmachine: Using API Version  1
	I0816 10:24:22.882581    4656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:24:22.882800    4656 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:24:22.882926    4656 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:24:22.883020    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetState
	I0816 10:24:22.883103    4656 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:24:22.883215    4656 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:24:22.884141    4656 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid 3849 missing from process table
	I0816 10:24:22.884173    4656 fix.go:112] recreateIfNeeded on ha-286000-m03: state=Stopped err=<nil>
	I0816 10:24:22.884183    4656 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	W0816 10:24:22.884273    4656 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 10:24:22.934970    4656 out.go:177] * Restarting existing hyperkit VM for "ha-286000-m03" ...
	I0816 10:24:22.989195    4656 main.go:141] libmachine: (ha-286000-m03) Calling .Start
	I0816 10:24:22.989384    4656 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:24:22.989428    4656 main.go:141] libmachine: (ha-286000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/hyperkit.pid
	I0816 10:24:22.990416    4656 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid 3849 missing from process table
	I0816 10:24:22.990433    4656 main.go:141] libmachine: (ha-286000-m03) DBG | pid 3849 is in state "Stopped"
	I0816 10:24:22.990450    4656 main.go:141] libmachine: (ha-286000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/hyperkit.pid...
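
Editor's note: restart hygiene — the pid file survived an unclean shutdown, so the driver checks whether pid 3849 is actually alive before removing the stale file. The conventional liveness probe is signal 0 (ESRCH means "missing from process table"), sketched below:

    package main

    import (
        "fmt"
        "os"
        "syscall"
    )

    // alive reports whether a process with the given pid exists, by
    // sending signal 0 (no-op delivery, existence check only).
    func alive(pid int) bool {
        p, err := os.FindProcess(pid) // always succeeds on Unix
        if err != nil {
            return false
        }
        return p.Signal(syscall.Signal(0)) == nil
    }

    func main() {
        fmt.Println("pid 3849 alive:", alive(3849))
    }
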
	I0816 10:24:22.991046    4656 main.go:141] libmachine: (ha-286000-m03) DBG | Using UUID 408efb18-ff91-428f-b414-6eb0d982a779
	I0816 10:24:23.018344    4656 main.go:141] libmachine: (ha-286000-m03) DBG | Generated MAC 8a:e:de:5b:b5:8b
	I0816 10:24:23.018367    4656 main.go:141] libmachine: (ha-286000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000
	I0816 10:24:23.018512    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"408efb18-ff91-428f-b414-6eb0d982a779", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003acb40)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:24:23.018542    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"408efb18-ff91-428f-b414-6eb0d982a779", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003acb40)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:24:23.018607    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "408efb18-ff91-428f-b414-6eb0d982a779", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/ha-286000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"}
	I0816 10:24:23.018646    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 408efb18-ff91-428f-b414-6eb0d982a779 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/ha-286000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"
	I0816 10:24:23.018659    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 10:24:23.019982    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 DEBUG: hyperkit: Pid is 4694
	I0816 10:24:23.020375    4656 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 0
	I0816 10:24:23.020392    4656 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:24:23.020487    4656 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 4694
	I0816 10:24:23.022453    4656 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:24:23.022498    4656 main.go:141] libmachine: (ha-286000-m03) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0816 10:24:23.022517    4656 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:24:23.022531    4656 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:24:23.022542    4656 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:24:23.022552    4656 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66c0d7f9}
	I0816 10:24:23.022566    4656 main.go:141] libmachine: (ha-286000-m03) DBG | Found match: 8a:e:de:5b:b5:8b
	I0816 10:24:23.022574    4656 main.go:141] libmachine: (ha-286000-m03) DBG | IP: 192.169.0.7
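
Editor's note: hyperkit does not report guest IPs, so the driver recovers the address by matching the VM's generated MAC (8a:e:de:5b:b5:8b) against macOS's /var/db/dhcpd_leases, as the search above shows. A sketch of that lookup, assuming the stock lease format (ip_address= / hw_address=1,<mac> lines per entry):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // ipForMAC scans the dhcpd lease file for the entry whose
    // hw_address matches the VM's MAC and returns its ip_address.
    func ipForMAC(leasePath, mac string) (string, error) {
        f, err := os.Open(leasePath)
        if err != nil {
            return "", err
        }
        defer f.Close()
        var ip string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            switch {
            case strings.HasPrefix(line, "ip_address="):
                ip = strings.TrimPrefix(line, "ip_address=")
            case strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, ","+mac):
                return ip, nil
            }
        }
        return "", fmt.Errorf("no lease for %s", mac)
    }

    func main() {
        ip, err := ipForMAC("/var/db/dhcpd_leases", "8a:e:de:5b:b5:8b")
        fmt.Println(ip, err)
    }
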
	I0816 10:24:23.022592    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetConfigRaw
	I0816 10:24:23.023252    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:24:23.023444    4656 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:24:23.023931    4656 machine.go:93] provisionDockerMachine start ...
	I0816 10:24:23.023941    4656 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:24:23.024079    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:24:23.024190    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:24:23.024302    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:23.024432    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:23.024554    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:24:23.024692    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:23.024832    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:24:23.024839    4656 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 10:24:23.028441    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 10:24:23.037003    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 10:24:23.038503    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:24:23.038539    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:24:23.038554    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:24:23.038589    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:24:23.422756    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 10:24:23.422770    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 10:24:23.537534    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:24:23.537554    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:24:23.537563    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:24:23.537570    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:24:23.538449    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 10:24:23.538460    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 10:24:29.168490    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:29 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 10:24:29.168581    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:29 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 10:24:29.168594    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:29 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 10:24:29.192004    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:29 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 10:24:58.091940    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
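
Every "About to run SSH command" step above is a one-shot command over an SSH session authenticated with the machine's id_rsa key. A rough equivalent of that round trip with golang.org/x/crypto/ssh, using the host, port, user and key path from the log (host-key verification is skipped here, as it effectively is for a local VM):

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local VM, not for real hosts
	}
	client, err := ssh.Dial("tcp", "192.169.0.7:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	out, err := session.Output("hostname") // same first command as the log
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", out) // "minikube" before the rename, "ha-286000-m03" after
}
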
	I0816 10:24:58.091955    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:24:58.092103    4656 buildroot.go:166] provisioning hostname "ha-286000-m03"
	I0816 10:24:58.092114    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:24:58.092224    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:24:58.092330    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:24:58.092419    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:58.092518    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:58.092626    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:24:58.092758    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:58.092916    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:24:58.092925    4656 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-286000-m03 && echo "ha-286000-m03" | sudo tee /etc/hostname
	I0816 10:24:58.165459    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-286000-m03
	
	I0816 10:24:58.165475    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:24:58.165609    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:24:58.165705    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:58.165800    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:58.165888    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:24:58.166012    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:58.166160    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:24:58.166171    4656 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-286000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-286000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-286000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 10:24:58.234524    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
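
The shell block above keeps exactly one 127.0.1.1 alias for the node name in /etc/hosts: skip if the name is already present, rewrite an existing 127.0.1.1 line if there is one, append otherwise. The same logic as a local Go sketch (the real code runs it remotely under sudo):

package main

import (
	"log"
	"os"
	"regexp"
	"strings"
)

// ensureHostAlias mirrors the grep/sed/tee fragment above.
func ensureHostAlias(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	text := string(data)

	// already mapped somewhere? then do nothing
	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`).MatchString(text) {
		return nil
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(text) {
		text = loopback.ReplaceAllString(text, "127.0.1.1 "+hostname)
	} else {
		if !strings.HasSuffix(text, "\n") {
			text += "\n"
		}
		text += "127.0.1.1 " + hostname + "\n"
	}
	return os.WriteFile(path, []byte(text), 0644)
}

func main() {
	if err := ensureHostAlias("/etc/hosts", "ha-286000-m03"); err != nil {
		log.Fatal(err)
	}
}
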
	I0816 10:24:58.234539    4656 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19461-1276/.minikube CaCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19461-1276/.minikube}
	I0816 10:24:58.234548    4656 buildroot.go:174] setting up certificates
	I0816 10:24:58.234555    4656 provision.go:84] configureAuth start
	I0816 10:24:58.234562    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:24:58.234691    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:24:58.234792    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:24:58.234865    4656 provision.go:143] copyHostCerts
	I0816 10:24:58.234895    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:24:58.234961    4656 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem, removing ...
	I0816 10:24:58.234967    4656 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:24:58.235111    4656 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem (1078 bytes)
	I0816 10:24:58.235314    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:24:58.235356    4656 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem, removing ...
	I0816 10:24:58.235361    4656 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:24:58.235442    4656 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem (1123 bytes)
	I0816 10:24:58.235582    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:24:58.235624    4656 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem, removing ...
	I0816 10:24:58.235629    4656 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:24:58.235704    4656 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem (1679 bytes)
	I0816 10:24:58.235845    4656 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem org=jenkins.ha-286000-m03 san=[127.0.0.1 192.169.0.7 ha-286000-m03 localhost minikube]
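
That "generating server cert" line produces a CA-signed certificate whose SAN list covers every name and address the Docker daemon can be reached by: loopback, the lease IP, and the node's hostnames. A compressed crypto/x509 sketch of the signing step; it creates a throwaway CA in-process instead of loading ca.pem/ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for .minikube/certs/ca.pem + ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SAN set from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-286000-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.7")},
		DNSNames:     []string{"ha-286000-m03", "localhost", "minikube"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
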
	I0816 10:24:58.291944    4656 provision.go:177] copyRemoteCerts
	I0816 10:24:58.291996    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 10:24:58.292012    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:24:58.292152    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:24:58.292249    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:58.292325    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:24:58.292403    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:24:58.328961    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 10:24:58.329060    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 10:24:58.348824    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 10:24:58.348900    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 10:24:58.369137    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 10:24:58.369210    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 10:24:58.388899    4656 provision.go:87] duration metric: took 154.336521ms to configureAuth
	I0816 10:24:58.388918    4656 buildroot.go:189] setting minikube options for container-runtime
	I0816 10:24:58.389098    4656 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:24:58.389135    4656 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:24:58.389270    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:24:58.389362    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:24:58.389460    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:58.389543    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:58.389622    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:24:58.389731    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:58.389859    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:24:58.389867    4656 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 10:24:58.452406    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 10:24:58.452425    4656 buildroot.go:70] root file system type: tmpfs
	I0816 10:24:58.452504    4656 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 10:24:58.452516    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:24:58.452651    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:24:58.452745    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:58.452844    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:58.452943    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:24:58.453082    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:58.453228    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:24:58.453271    4656 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 10:24:58.524937    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 10:24:58.524958    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:24:58.525096    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:24:58.525191    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:58.525277    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:58.525354    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:24:58.525485    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:58.525630    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:24:58.525643    4656 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 10:25:00.070144    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
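
The diff || { mv; systemctl ... } one-liner above is an idempotence guard: the new unit is only swapped in, and Docker only reloaded and restarted, when the rendered file differs from what is installed. Here diff fails because docker.service does not exist yet, so the install branch runs and the symlink is created. The same guard, sketched in Go:

package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
	"os/exec"
)

// installIfChanged replaces dst with src and restarts the service only
// when the contents differ (or dst does not exist yet).
func installIfChanged(src, dst, service string) error {
	want, err := os.ReadFile(src)
	if err != nil {
		return err
	}
	have, err := os.ReadFile(dst)
	if err == nil && bytes.Equal(have, want) {
		return nil // identical: skip the disruptive restart
	}
	if err := os.Rename(src, dst); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"},
		{"enable", service},
		{"restart", service},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	if err := installIfChanged(
		"/lib/systemd/system/docker.service.new",
		"/lib/systemd/system/docker.service",
		"docker"); err != nil {
		log.Fatal(err)
	}
}
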
	I0816 10:25:00.070159    4656 machine.go:96] duration metric: took 37.04784939s to provisionDockerMachine
	I0816 10:25:00.070167    4656 start.go:293] postStartSetup for "ha-286000-m03" (driver="hyperkit")
	I0816 10:25:00.070174    4656 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 10:25:00.070189    4656 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:25:00.070367    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 10:25:00.070380    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:25:00.070472    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:25:00.070550    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:25:00.070650    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:25:00.070738    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:25:00.107373    4656 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 10:25:00.110616    4656 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 10:25:00.110628    4656 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/addons for local assets ...
	I0816 10:25:00.110727    4656 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/files for local assets ...
	I0816 10:25:00.110900    4656 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> 18312.pem in /etc/ssl/certs
	I0816 10:25:00.110906    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /etc/ssl/certs/18312.pem
	I0816 10:25:00.111116    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 10:25:00.118270    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:25:00.138002    4656 start.go:296] duration metric: took 67.828962ms for postStartSetup
	I0816 10:25:00.138023    4656 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:25:00.138205    4656 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0816 10:25:00.138223    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:25:00.138316    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:25:00.138399    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:25:00.138484    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:25:00.138558    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:25:00.176923    4656 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0816 10:25:00.176990    4656 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0816 10:25:00.228121    4656 fix.go:56] duration metric: took 37.358659467s for fixHost
	I0816 10:25:00.228163    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:25:00.228436    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:25:00.228658    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:25:00.228845    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:25:00.229035    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:25:00.229265    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:25:00.229477    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:25:00.229490    4656 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 10:25:00.290756    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723829100.434156000
	
	I0816 10:25:00.290771    4656 fix.go:216] guest clock: 1723829100.434156000
	I0816 10:25:00.290778    4656 fix.go:229] Guest: 2024-08-16 10:25:00.434156 -0700 PDT Remote: 2024-08-16 10:25:00.228148 -0700 PDT m=+88.850268934 (delta=206.008ms)
	I0816 10:25:00.290788    4656 fix.go:200] guest clock delta is within tolerance: 206.008ms
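
The fix.go lines compare the guest's `date +%s.%N` output against the host clock and only resync when the drift exceeds a tolerance; the 206ms here passes. A sketch of that comparison using the two timestamps from the log (the 2s threshold is an assumption, the log does not state the actual limit):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses "seconds.nanoseconds" as printed by `date +%s.%N`
// on the guest and returns the signed drift against the host clock.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 { // %N always prints nine digits
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Unix(sec, nsec).Sub(host), nil
}

func main() {
	const tolerance = 2 * time.Second              // assumed threshold, not from the log
	host := time.Unix(1723829100, 228148000)       // the "Remote" timestamp above
	delta, err := clockDelta("1723829100.434156000", host)
	if err != nil {
		panic(err)
	}
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= tolerance) // delta=206.008ms
}
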
	I0816 10:25:00.290792    4656 start.go:83] releasing machines lock for "ha-286000-m03", held for 37.421364862s
	I0816 10:25:00.290808    4656 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:25:00.290938    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:25:00.313666    4656 out.go:177] * Found network options:
	I0816 10:25:00.334418    4656 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0816 10:25:00.355435    4656 proxy.go:119] fail to check proxy env: Error ip not in block
	W0816 10:25:00.355461    4656 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:25:00.355478    4656 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:25:00.356143    4656 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:25:00.356356    4656 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:25:00.356474    4656 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 10:25:00.356513    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	W0816 10:25:00.356569    4656 proxy.go:119] fail to check proxy env: Error ip not in block
	W0816 10:25:00.356590    4656 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:25:00.356679    4656 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0816 10:25:00.356698    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:25:00.356711    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:25:00.356905    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:25:00.356940    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:25:00.357121    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:25:00.357153    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:25:00.357335    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:25:00.357342    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:25:00.357519    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	W0816 10:25:00.391006    4656 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 10:25:00.391060    4656 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 10:25:00.439137    4656 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 10:25:00.439154    4656 start.go:495] detecting cgroup driver to use...
	I0816 10:25:00.439231    4656 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:25:00.454661    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 10:25:00.463185    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 10:25:00.471601    4656 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 10:25:00.471658    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 10:25:00.480421    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:25:00.488812    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 10:25:00.497664    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:25:00.506080    4656 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 10:25:00.514726    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 10:25:00.523293    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 10:25:00.531650    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 10:25:00.540020    4656 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 10:25:00.547503    4656 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 10:25:00.555089    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:25:00.643202    4656 ssh_runner.go:195] Run: sudo systemctl restart containerd
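
The sed pipeline above rewrites /etc/containerd/config.toml to match the cluster's "cgroupfs" driver: SystemdCgroup is forced to false, the legacy io.containerd.runtime.v1.linux runtime is mapped to io.containerd.runc.v2, and the pause image and CNI conf_dir are pinned before containerd is restarted. One of those edits translated to Go:

package main

import (
	"log"
	"os"
	"regexp"
)

// setSystemdCgroup mirrors:
//   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
func setSystemdCgroup(path string, enabled bool) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	val := "false"
	if enabled {
		val = "true"
	}
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = "+val))
	return os.WriteFile(path, out, 0644)
}

func main() {
	if err := setSystemdCgroup("/etc/containerd/config.toml", false); err != nil {
		log.Fatal(err)
	}
}
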
	I0816 10:25:00.663102    4656 start.go:495] detecting cgroup driver to use...
	I0816 10:25:00.663170    4656 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 10:25:00.680492    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:25:00.693170    4656 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 10:25:00.707541    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:25:00.718044    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:25:00.728609    4656 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 10:25:00.747431    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:25:00.757669    4656 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:25:00.772722    4656 ssh_runner.go:195] Run: which cri-dockerd
	I0816 10:25:00.775964    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 10:25:00.783500    4656 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 10:25:00.797291    4656 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 10:25:00.889940    4656 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 10:25:00.996518    4656 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 10:25:00.996540    4656 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 10:25:01.010228    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:25:01.104164    4656 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:25:03.365849    4656 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.261743451s)
	I0816 10:25:03.365910    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0816 10:25:03.376096    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:25:03.386222    4656 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0816 10:25:03.479109    4656 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0816 10:25:03.594325    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:25:03.706928    4656 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0816 10:25:03.721224    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:25:03.732283    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:25:03.827894    4656 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0816 10:25:03.888066    4656 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0816 10:25:03.888145    4656 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0816 10:25:03.893520    4656 start.go:563] Will wait 60s for crictl version
	I0816 10:25:03.893575    4656 ssh_runner.go:195] Run: which crictl
	I0816 10:25:03.896917    4656 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 10:25:03.925631    4656 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0816 10:25:03.925712    4656 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:25:03.944598    4656 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:25:03.985082    4656 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0816 10:25:04.029274    4656 out.go:177]   - env NO_PROXY=192.169.0.5
	I0816 10:25:04.051107    4656 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I0816 10:25:04.072084    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:25:04.072364    4656 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0816 10:25:04.075855    4656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:25:04.085745    4656 mustload.go:65] Loading cluster: ha-286000
	I0816 10:25:04.085928    4656 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:25:04.086156    4656 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:25:04.086178    4656 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:25:04.095096    4656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52236
	I0816 10:25:04.095437    4656 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:25:04.095780    4656 main.go:141] libmachine: Using API Version  1
	I0816 10:25:04.095794    4656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:25:04.095992    4656 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:25:04.096098    4656 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:25:04.096178    4656 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:25:04.096257    4656 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 4669
	I0816 10:25:04.097216    4656 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:25:04.097478    4656 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:25:04.097503    4656 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:25:04.106283    4656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52238
	I0816 10:25:04.106623    4656 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:25:04.106944    4656 main.go:141] libmachine: Using API Version  1
	I0816 10:25:04.106954    4656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:25:04.107151    4656 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:25:04.107299    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:25:04.107413    4656 certs.go:68] Setting up /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000 for IP: 192.169.0.7
	I0816 10:25:04.107420    4656 certs.go:194] generating shared ca certs ...
	I0816 10:25:04.107432    4656 certs.go:226] acquiring lock for ca certs: {Name:mka549fbc467849834d2267da204028c49d88a3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:25:04.107603    4656 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key
	I0816 10:25:04.107673    4656 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key
	I0816 10:25:04.107682    4656 certs.go:256] generating profile certs ...
	I0816 10:25:04.107801    4656 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key
	I0816 10:25:04.107821    4656 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.fbb2b423
	I0816 10:25:04.107836    4656 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.fbb2b423 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I0816 10:25:04.288936    4656 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.fbb2b423 ...
	I0816 10:25:04.288952    4656 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.fbb2b423: {Name:mk5b5d381df2e0229dfa97b94f9501ac61e1f4af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:25:04.289301    4656 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.fbb2b423 ...
	I0816 10:25:04.289309    4656 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.fbb2b423: {Name:mk1c231c3478673ccffbd14f4f0c5e31373f1228 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:25:04.289510    4656 certs.go:381] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.fbb2b423 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt
	I0816 10:25:04.289730    4656 certs.go:385] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.fbb2b423 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key
	I0816 10:25:04.289982    4656 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key
	I0816 10:25:04.289991    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 10:25:04.290020    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 10:25:04.290039    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 10:25:04.290058    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 10:25:04.290076    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 10:25:04.290101    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 10:25:04.290120    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 10:25:04.290144    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 10:25:04.290239    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem (1338 bytes)
	W0816 10:25:04.290288    4656 certs.go:480] ignoring /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831_empty.pem, impossibly tiny 0 bytes
	I0816 10:25:04.290297    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 10:25:04.290334    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem (1078 bytes)
	I0816 10:25:04.290369    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem (1123 bytes)
	I0816 10:25:04.290397    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem (1679 bytes)
	I0816 10:25:04.290469    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:25:04.290504    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem -> /usr/share/ca-certificates/1831.pem
	I0816 10:25:04.290530    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /usr/share/ca-certificates/18312.pem
	I0816 10:25:04.290551    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:25:04.290581    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:25:04.290714    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:25:04.290801    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:25:04.290889    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:25:04.290979    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:25:04.320175    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0816 10:25:04.323948    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0816 10:25:04.332572    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0816 10:25:04.335881    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0816 10:25:04.344208    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0816 10:25:04.347261    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0816 10:25:04.355353    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0816 10:25:04.358754    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0816 10:25:04.367226    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0816 10:25:04.370644    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0816 10:25:04.379014    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0816 10:25:04.382464    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0816 10:25:04.390940    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 10:25:04.411283    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 10:25:04.431206    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 10:25:04.451054    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 10:25:04.470415    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0816 10:25:04.490122    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 10:25:04.509717    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 10:25:04.529383    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 10:25:04.549154    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem --> /usr/share/ca-certificates/1831.pem (1338 bytes)
	I0816 10:25:04.568985    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /usr/share/ca-certificates/18312.pem (1708 bytes)
	I0816 10:25:04.588519    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 10:25:04.607970    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0816 10:25:04.621401    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0816 10:25:04.635625    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0816 10:25:04.649570    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0816 10:25:04.663171    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0816 10:25:04.676495    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0816 10:25:04.690056    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0816 10:25:04.703786    4656 ssh_runner.go:195] Run: openssl version
	I0816 10:25:04.707923    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1831.pem && ln -fs /usr/share/ca-certificates/1831.pem /etc/ssl/certs/1831.pem"
	I0816 10:25:04.716268    4656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1831.pem
	I0816 10:25:04.719659    4656 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 16:57 /usr/share/ca-certificates/1831.pem
	I0816 10:25:04.719702    4656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1831.pem
	I0816 10:25:04.723849    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1831.pem /etc/ssl/certs/51391683.0"
	I0816 10:25:04.732246    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18312.pem && ln -fs /usr/share/ca-certificates/18312.pem /etc/ssl/certs/18312.pem"
	I0816 10:25:04.740650    4656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18312.pem
	I0816 10:25:04.743948    4656 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 16:57 /usr/share/ca-certificates/18312.pem
	I0816 10:25:04.743983    4656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18312.pem
	I0816 10:25:04.748103    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18312.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 10:25:04.756745    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 10:25:04.765039    4656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:25:04.768354    4656 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:48 /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:25:04.768417    4656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:25:04.772556    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 10:25:04.781063    4656 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 10:25:04.784249    4656 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 10:25:04.784287    4656 kubeadm.go:934] updating node {m03 192.169.0.7 8443 v1.31.0 docker true true} ...
	I0816 10:25:04.784343    4656 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-286000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 10:25:04.784359    4656 kube-vip.go:115] generating kube-vip config ...
	I0816 10:25:04.784396    4656 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 10:25:04.796986    4656 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 10:25:04.797028    4656 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
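
That manifest is copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines down, where the kubelet runs it as a static pod; everything kube-vip does here (announcing the 192.169.0.254 VIP, leader election via the plndr-cp-lock lease, load-balancing the API on 8443) is driven by that env list. A small sketch that reads those settings back out of the manifest with gopkg.in/yaml.v3:

package main

import (
	"fmt"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

// Just enough of the Pod schema to reach the container env list.
type pod struct {
	Spec struct {
		Containers []struct {
			Name  string `yaml:"name"`
			Image string `yaml:"image"`
			Env   []struct {
				Name  string `yaml:"name"`
				Value string `yaml:"value"`
			} `yaml:"env"`
		} `yaml:"containers"`
	} `yaml:"spec"`
}

func main() {
	data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
	if err != nil {
		log.Fatal(err)
	}
	var p pod
	if err := yaml.Unmarshal(data, &p); err != nil {
		log.Fatal(err)
	}
	for _, c := range p.Spec.Containers {
		fmt.Println(c.Name, c.Image)
		for _, e := range c.Env {
			// entries using valueFrom print an empty value here
			fmt.Printf("  %s=%s\n", e.Name, e.Value) // e.g. address=192.169.0.254
		}
	}
}
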
	I0816 10:25:04.797080    4656 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 10:25:04.805783    4656 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0816 10:25:04.805828    4656 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0816 10:25:04.815857    4656 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0816 10:25:04.815860    4656 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0816 10:25:04.815857    4656 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0816 10:25:04.815875    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0816 10:25:04.815878    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0816 10:25:04.815911    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:25:04.815963    4656 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0816 10:25:04.815967    4656 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0816 10:25:04.819783    4656 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0816 10:25:04.819808    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0816 10:25:04.819886    4656 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0816 10:25:04.819905    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0816 10:25:04.838560    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0816 10:25:04.838690    4656 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0816 10:25:04.892677    4656 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0816 10:25:04.892722    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
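
The three "Not caching binary" lines fetch kubeadm, kubectl and kubelet straight from dl.k8s.io, each verified against the sibling .sha256 file named in the checksum= fragment, while the stat-based existence checks skip the upload when a binary is already on the node. A sketch of the verify-while-downloading half:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"strings"
)

// fetchVerified downloads url to dst and checks the bytes against the
// hex digest published at url+".sha256".
func fetchVerified(url, dst string) error {
	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	sumBytes, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}
	want := strings.Fields(string(sumBytes))[0]

	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()

	// hash while writing so the file is only read once
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch for %s: got %s want %s", url, got, want)
	}
	return nil
}

func main() {
	url := "https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet"
	if err := fetchVerified(url, "kubelet"); err != nil {
		log.Fatal(err)
	}
}
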
	I0816 10:25:05.452270    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0816 10:25:05.460515    4656 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0816 10:25:05.473974    4656 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 10:25:05.487288    4656 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0816 10:25:05.501421    4656 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0816 10:25:05.504340    4656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:25:05.514511    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:25:05.610695    4656 ssh_runner.go:195] Run: sudo systemctl start kubelet
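
	The /etc/hosts rewrite a few lines above is dense once shell-escaped; unpacked, it filters out any stale control-plane.minikube.internal entry, appends the current address, and copies the staged file back with sudo. A small sketch that rebuilds the same one-liner (the helper name is hypothetical):

	    package provision

	    import "fmt"

	    // hostsUpdateCmd reconstructs the one-liner from the log: drop any
	    // stale control-plane.minikube.internal entry from /etc/hosts,
	    // append the current endpoint, stage the result in a temp file,
	    // and copy it back with sudo.
	    func hostsUpdateCmd(ip string) string {
	        return fmt.Sprintf("{ grep -v $'\\tcontrol-plane.minikube.internal$' \"/etc/hosts\"; "+
	            "echo \"%s\tcontrol-plane.minikube.internal\"; } > /tmp/h.$$; "+
	            "sudo cp /tmp/h.$$ \"/etc/hosts\"", ip)
	    }

	Staging through /tmp/h.$$ keeps the rewrite down to a single cp, so resolvers never see a half-written /etc/hosts.
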
	I0816 10:25:05.627113    4656 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:25:05.627365    4656 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:25:05.650018    4656 out.go:177] * Verifying Kubernetes components...
	I0816 10:25:05.671252    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:25:05.770878    4656 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:25:06.484588    4656 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:25:06.484787    4656 kapi.go:59] client config for ha-286000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key", CAFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x5540f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0816 10:25:06.484828    4656 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0816 10:25:06.484987    4656 node_ready.go:35] waiting up to 6m0s for node "ha-286000-m03" to be "Ready" ...
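
	The long run of requests that follows is one readiness poll: roughly every 500ms minikube GETs /api/v1/nodes/ha-286000-m03 and receives 404 Not Found, because the node object never registers with the API server, and the loop keeps retrying until its 6m0s budget expires. A minimal sketch of the shape of that loop, assuming a hypothetical getNode callback rather than minikube's node_ready implementation:

	    package provision

	    import (
	        "fmt"
	        "time"
	    )

	    // waitNodeReady polls getNode until the node reports Ready or the
	    // timeout elapses, mirroring the ~500ms retry cadence in the log.
	    // getNode is a hypothetical stand-in for the GET
	    // /api/v1/nodes/<name> round trip.
	    func waitNodeReady(getNode func(name string) (ready bool, err error),
	        name string, timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            ready, err := getNode(name)
	            if err == nil && ready {
	                return nil
	            }
	            // err != nil covers the 404s in the log: the node object
	            // is not registered yet, so keep polling.
	            time.Sleep(500 * time.Millisecond)
	        }
	        return fmt.Errorf("node %q not Ready within %s", name, timeout)
	    }
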
	I0816 10:25:06.485034    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:06.485039    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:06.485045    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:06.485048    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:06.487783    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:06.985311    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:06.985336    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:06.985348    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:06.985354    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:06.989349    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:07.485490    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:07.485513    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:07.485524    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:07.485529    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:07.489016    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:07.985178    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:07.985193    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:07.985199    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:07.985202    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:07.987679    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:08.487278    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:08.487300    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:08.487309    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:08.487315    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:08.491486    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:25:08.491567    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:08.987160    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:08.987184    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:08.987194    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:08.987200    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:08.990942    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:09.485053    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:09.485101    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:09.485109    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:09.485113    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:09.487562    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:09.985592    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:09.985671    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:09.985687    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:09.985696    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:09.989637    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:10.486025    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:10.486050    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:10.486061    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:10.486067    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:10.489557    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:10.985086    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:10.985127    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:10.985134    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:10.985139    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:10.987914    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:10.987975    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:11.485153    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:11.485176    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:11.485186    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:11.485193    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:11.488752    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:11.986139    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:11.986154    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:11.986162    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:11.986166    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:11.989386    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:12.485803    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:12.485849    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:12.485865    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:12.485870    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:12.489472    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:12.986570    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:12.986596    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:12.986607    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:12.986612    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:12.990236    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:12.990376    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:13.484914    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:13.484926    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:13.484932    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:13.484935    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:13.488977    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:25:13.986680    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:13.986696    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:13.986702    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:13.986705    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:13.989158    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:14.486321    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:14.486382    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:14.486402    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:14.486412    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:14.491203    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:25:14.985877    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:14.985901    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:14.985912    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:14.985949    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:14.989703    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:15.485277    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:15.485292    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:15.485299    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:15.485302    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:15.487768    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:15.487830    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:15.985642    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:15.985663    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:15.985675    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:15.985680    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:15.989433    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:16.484901    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:16.484927    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:16.484939    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:16.484944    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:16.488779    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:16.986034    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:16.986047    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:16.986054    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:16.986062    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:16.988709    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:17.486864    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:17.486887    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:17.486924    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:17.486931    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:17.490473    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:17.490551    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:17.985889    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:17.985909    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:17.985921    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:17.985925    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:17.989836    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:18.485398    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:18.485414    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:18.485421    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:18.485425    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:18.487889    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:18.985349    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:18.985378    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:18.985436    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:18.985442    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:18.988422    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:19.485081    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:19.485102    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:19.485113    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:19.485121    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:19.488852    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:19.985049    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:19.985062    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:19.985081    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:19.985085    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:19.987210    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:19.987270    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:20.484914    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:20.484939    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:20.484949    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:20.484954    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:20.488695    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:20.985203    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:20.985229    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:20.985239    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:20.985245    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:20.989283    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:25:21.484963    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:21.484979    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:21.484985    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:21.484989    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:21.487275    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:21.985755    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:21.985782    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:21.985793    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:21.985798    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:21.989914    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:25:21.989997    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:22.485717    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:22.485745    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:22.485824    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:22.485835    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:22.489667    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:22.985286    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:22.985301    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:22.985307    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:22.985318    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:22.987903    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:23.485546    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:23.485567    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:23.485578    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:23.485583    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:23.489380    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:23.985686    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:23.985757    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:23.985777    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:23.985792    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:23.989466    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:24.484557    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:24.484568    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:24.484575    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:24.484578    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:24.487089    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:24.487151    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:24.985579    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:24.985600    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:24.985609    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:24.985614    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:24.989536    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:25.485541    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:25.485564    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:25.485576    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:25.485583    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:25.489272    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:25.984513    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:25.984529    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:25.984536    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:25.984540    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:25.987095    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:26.486003    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:26.486022    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:26.486034    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:26.486043    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:26.489357    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:26.489445    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:26.985326    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:26.985345    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:26.985357    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:26.985363    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:26.988993    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:27.484603    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:27.484616    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:27.484621    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:27.484625    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:27.486943    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:27.984825    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:27.984844    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:27.984855    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:27.984861    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:27.988691    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:28.486230    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:28.486245    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:28.486253    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:28.486259    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:28.491735    4656 round_trippers.go:574] Response Status: 404 Not Found in 5 milliseconds
	I0816 10:25:28.491792    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:28.985268    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:28.985287    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:28.985315    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:28.985319    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:28.987718    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:29.485335    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:29.485355    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:29.485367    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:29.485372    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:29.488781    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:29.984712    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:29.984727    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:29.984736    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:29.984740    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:29.987128    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:30.484437    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:30.484448    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:30.484454    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:30.484457    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:30.487047    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:30.984627    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:30.984648    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:30.984659    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:30.984665    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:30.988084    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:30.988236    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:31.486364    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:31.486416    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:31.486431    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:31.486464    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:31.489760    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:31.985027    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:31.985041    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:31.985048    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:31.985052    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:31.987323    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:32.486368    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:32.486394    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:32.486407    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:32.486413    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:32.490571    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:25:32.984941    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:32.984966    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:32.984978    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:32.984984    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:32.988672    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:32.988757    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:33.484801    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:33.484813    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:33.484818    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:33.484823    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:33.487037    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:33.985797    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:33.985821    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:33.985834    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:33.985843    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:33.989368    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:34.484289    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:34.484304    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:34.484313    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:34.484318    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:34.486642    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:34.985159    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:34.985174    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:34.985181    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:34.985184    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:34.987765    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:35.484974    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:35.484995    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:35.485006    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:35.485012    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:35.488175    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:35.488288    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:35.984879    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:35.984901    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:35.984913    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:35.984918    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:35.988822    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:36.485651    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:36.485664    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:36.485671    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:36.485673    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:36.488116    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:36.985565    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:36.985584    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:36.985595    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:36.985601    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:36.989216    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:37.485779    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:37.485862    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:37.485877    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:37.485882    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:37.489350    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:37.489427    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:37.984128    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:37.984140    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:37.984146    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:37.984150    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:37.986646    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:38.485023    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:38.485039    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:38.485048    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:38.485052    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:38.487768    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:38.984183    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:38.984206    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:38.984261    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:38.984269    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:38.987325    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:39.485275    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:39.485321    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:39.485334    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:39.485338    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:39.487742    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:39.985699    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:39.985718    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:39.985729    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:39.985737    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:39.988773    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:39.988844    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:40.484531    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:40.484546    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:40.484554    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:40.484559    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:40.487018    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:40.985498    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:40.985513    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:40.985520    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:40.985524    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:40.987999    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:41.484329    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:41.484342    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:41.484347    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:41.484351    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:41.486849    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:41.984847    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:41.984871    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:41.984882    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:41.984889    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:41.988357    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:42.484908    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:42.484921    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:42.484927    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:42.484931    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:42.487626    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:42.487688    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:42.985273    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:42.985299    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:42.985311    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:42.985325    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:42.988684    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:43.485086    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:43.485111    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:43.485128    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:43.485134    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:43.488939    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:43.983910    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:43.983926    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:43.983933    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:43.983936    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:43.986292    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:44.484259    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:44.484279    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:44.484291    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:44.484328    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:44.486962    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:44.984437    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:44.984457    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:44.984467    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:44.984475    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:44.987835    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:44.987961    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:45.484938    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:45.484953    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:45.484961    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:45.484964    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:45.487461    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:45.985086    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:45.985109    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:45.985119    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:45.985124    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:45.988699    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:46.484276    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:46.484299    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:46.484311    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:46.484319    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:46.488509    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:25:46.983907    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:46.983920    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:46.983926    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:46.983929    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:46.986359    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:47.485117    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:47.485136    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:47.485145    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:47.485150    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:47.487992    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:47.488052    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:47.984816    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:47.984870    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:47.984882    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:47.984891    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:47.988129    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:48.483883    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:48.483900    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:48.483906    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:48.483911    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:48.486198    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:48.984169    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:48.984190    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:48.984203    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:48.984208    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:48.987942    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:49.484903    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:49.484919    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:49.484927    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:49.484933    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:49.487106    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:49.984353    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:49.984369    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:49.984375    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:49.984378    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:49.987041    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:49.987105    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:50.485525    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:50.485573    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:50.485599    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:50.485608    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:50.489590    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:50.983824    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:50.983847    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:50.983858    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:50.983864    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:50.987088    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:51.484527    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:51.484553    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:51.484560    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:51.484563    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:51.489758    4656 round_trippers.go:574] Response Status: 404 Not Found in 5 milliseconds
	I0816 10:25:51.984190    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:51.984202    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:51.984208    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:51.984212    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:51.986039    4656 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0816 10:25:52.484065    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:52.484112    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:52.484125    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:52.484132    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:52.487172    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:52.487316    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:52.984150    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:52.984166    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:52.984173    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:52.984175    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:52.986345    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:53.484269    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:53.484284    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:53.484293    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:53.484296    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:53.486726    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:53.985717    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:53.985742    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:53.985759    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:53.985765    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:53.989726    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:54.484319    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:54.484335    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:54.484342    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:54.484345    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:54.486811    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:54.984778    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:54.984800    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:54.984808    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:54.984812    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:54.987368    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:54.987445    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:55.484244    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:55.484267    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:55.484278    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:55.484286    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:55.488016    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:55.985068    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:55.985083    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:55.985090    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:55.985093    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:55.987495    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:56.484782    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:56.484807    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:56.484819    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:56.484826    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:56.488310    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:56.984397    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:56.984419    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:56.984431    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:56.984439    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:56.988216    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:56.988289    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:57.483589    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:57.483605    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:57.483611    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:57.483616    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:57.486165    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:57.985574    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:57.985599    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:57.985611    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:57.985616    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:57.989363    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:58.484270    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:58.484308    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:58.484320    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:58.484325    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:58.487918    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:58.983666    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:58.983682    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:58.983689    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:58.983697    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:58.985851    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:59.483521    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:59.483543    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:59.483554    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:59.483560    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:59.487399    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:59.487469    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:59.984232    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:59.984247    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:59.984255    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:59.984260    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:59.986963    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:00.483820    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:00.483833    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:00.483839    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:00.483842    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:00.486243    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:00.983904    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:00.983929    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:00.983941    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:00.983945    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:00.988101    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:26:01.484375    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:01.484399    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:01.484411    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:01.484448    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:01.488415    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:01.488502    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:01.983385    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:01.983401    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:01.983408    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:01.983411    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:01.985938    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:02.483425    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:02.483445    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:02.483457    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:02.483465    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:02.487166    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:02.984027    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:02.984094    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:02.984108    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:02.984117    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:02.987822    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:03.483320    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:03.483335    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:03.483341    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:03.483344    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:03.485639    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:03.985036    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:03.985059    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:03.985073    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:03.985077    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:03.988791    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:03.988858    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:04.483621    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:04.483639    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:04.483651    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:04.483658    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:04.487066    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:04.983859    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:04.983875    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:04.983882    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:04.983886    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:04.986493    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:05.483389    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:05.483408    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:05.483418    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:05.483422    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:05.486586    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:05.984366    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:05.984385    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:05.984397    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:05.984404    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:05.988161    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:06.483211    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:06.483226    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:06.483232    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:06.483235    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:06.485660    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:06.485720    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:06.983347    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:06.983366    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:06.983377    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:06.983386    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:06.986526    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:07.484090    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:07.484111    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:07.484123    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:07.484128    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:07.488198    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:26:07.983724    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:07.983740    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:07.983747    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:07.983750    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:07.986537    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:08.484146    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:08.484166    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:08.484178    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:08.484183    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:08.487983    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:08.488057    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:08.984192    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:08.984213    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:08.984224    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:08.984229    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:08.988294    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:26:09.484029    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:09.484043    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:09.484049    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:09.484052    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:09.486705    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:09.985246    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:09.985271    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:09.985283    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:09.985288    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:09.989175    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:10.483317    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:10.483343    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:10.483354    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:10.483360    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:10.486962    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:10.983808    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:10.983827    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:10.983852    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:10.983857    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:10.986240    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:10.986308    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:11.483336    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:11.483358    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:11.483369    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:11.483379    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:11.486931    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:11.984519    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:11.984661    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:11.984687    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:11.984697    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:11.988638    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:12.484861    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:12.484877    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:12.484886    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:12.484889    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:12.487390    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:12.983427    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:12.983451    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:12.983463    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:12.983469    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:12.986694    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:12.986771    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:13.484765    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:13.484792    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:13.484805    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:13.484811    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:13.488619    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:13.983338    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:13.983352    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:13.983393    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:13.983399    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:13.985734    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:14.483998    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:14.484020    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:14.484032    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:14.484040    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:14.487538    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:14.984976    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:14.985003    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:14.985019    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:14.985025    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:14.988674    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:14.988745    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:15.483186    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:15.483201    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:15.483208    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:15.483212    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:15.485667    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:15.983775    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:15.983787    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:15.983794    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:15.983798    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:15.986102    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:16.483426    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:16.483449    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:16.483465    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:16.483473    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:16.487194    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:16.983030    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:16.983043    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:16.983049    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:16.983053    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:16.986507    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:17.484904    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:17.484932    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:17.484944    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:17.484951    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:17.488809    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:17.488909    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:17.983661    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:17.983682    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:17.983691    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:17.983700    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:17.987560    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:18.483005    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:18.483019    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:18.483043    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:18.483047    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:18.485247    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:18.982837    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:18.982858    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:18.982870    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:18.982877    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:18.986275    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:19.484274    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:19.484305    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:19.484343    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:19.484351    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:19.488293    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:19.983892    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:19.983907    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:19.983913    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:19.983917    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:19.986273    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:19.986330    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:20.483798    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:20.483825    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:20.483837    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:20.483843    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:20.487687    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:20.983298    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:20.983317    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:20.983329    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:20.983341    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:20.986753    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:21.483677    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:21.483697    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:21.483720    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:21.483722    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:21.486177    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:21.983903    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:21.983922    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:21.983934    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:21.983940    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:21.986911    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:21.986973    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:22.484112    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:22.484134    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:22.484147    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:22.484152    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:22.488262    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:26:22.983975    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:22.984028    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:22.984035    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:22.984039    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:22.986443    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:23.483009    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:23.483033    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:23.483050    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:23.483066    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:23.486675    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:23.983451    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:23.983483    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:23.983500    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:23.983511    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:23.987001    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:23.987063    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:24.483488    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:24.483536    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:24.483547    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:24.483551    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:24.485853    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:24.982731    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:24.982743    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:24.982750    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:24.982753    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:24.984789    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:25.483610    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:25.483630    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:25.483639    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:25.483645    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:25.487060    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:25.982597    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:25.982610    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:25.982622    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:25.982626    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:25.994285    4656 round_trippers.go:574] Response Status: 404 Not Found in 11 milliseconds
	I0816 10:26:25.994342    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:26.483108    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:26.483129    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:26.483141    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:26.483147    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:26.486703    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:26.984543    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:26.984561    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:26.984570    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:26.984574    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:26.987295    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:27.484057    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:27.484070    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:27.484076    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:27.484079    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:27.486438    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:27.982568    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:27.982579    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:27.982586    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:27.982589    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:27.984714    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:28.482928    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:28.482954    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:28.482966    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:28.482971    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:28.486982    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:28.487049    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:28.983984    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:28.984000    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:28.984007    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:28.984010    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:28.986187    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:29.482503    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:29.482527    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:29.482539    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:29.482545    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:29.485679    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:29.982668    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:29.982688    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:29.982700    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:29.982707    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:29.986106    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:30.483014    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:30.483035    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:30.483044    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:30.483048    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:30.485517    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:30.984509    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:30.984533    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:30.984544    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:30.984596    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:30.988289    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:30.988408    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:31.483916    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:31.483943    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:31.483981    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:31.483990    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:31.487890    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:31.982923    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:31.982943    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:31.982952    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:31.982956    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:31.985708    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:32.483569    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:32.483593    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:32.483605    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:32.483616    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:32.487327    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:32.982635    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:32.982661    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:32.982673    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:32.982679    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:32.986374    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:33.482846    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:33.482858    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:33.482872    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:33.482882    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:33.485277    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:33.485339    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:33.982793    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:33.982819    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:33.982831    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:33.982836    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:33.986153    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:34.482560    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:34.482578    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:34.482604    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:34.482610    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:34.486015    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:34.982428    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:34.982450    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:34.982463    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:34.982469    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:34.985873    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:35.483727    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:35.483740    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:35.483747    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:35.483751    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:35.485833    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:35.485894    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:35.982916    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:35.982943    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:35.982955    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:35.982965    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:35.986742    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:36.483103    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:36.483123    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:36.483132    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:36.483135    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:36.485868    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:36.982704    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:36.982762    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:36.982776    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:36.982790    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:36.986222    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:37.483468    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:37.483488    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:37.483500    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:37.483506    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:37.487244    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:37.487314    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:37.983372    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:37.983388    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:37.983394    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:37.983397    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:37.985922    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:38.483160    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:38.483179    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:38.483191    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:38.483199    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:38.486492    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:38.982468    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:38.982483    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:38.982489    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:38.982493    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:38.984866    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:39.482442    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:39.482495    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:39.482503    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:39.482507    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:39.484936    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:39.982412    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:39.982432    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:39.982441    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:39.982450    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:39.986230    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:39.986305    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:40.483055    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:40.483077    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:40.483087    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:40.483096    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:40.486444    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:40.983022    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:40.983056    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:40.983064    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:40.983068    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:40.985224    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:41.482184    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:41.482204    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:41.482215    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:41.482220    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:41.485468    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:41.983203    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:41.983227    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:41.983306    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:41.983320    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:41.987091    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:41.987171    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:42.483067    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:42.483083    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:42.483092    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:42.483096    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:42.485854    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:42.982325    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:42.982346    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:42.982358    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:42.982367    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:42.985247    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:43.482212    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:43.482232    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:43.482245    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:43.482253    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:43.485500    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:43.982210    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:43.982226    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:43.982232    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:43.982235    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:43.984789    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:44.483719    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:44.483739    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:44.483750    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:44.483758    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:44.487463    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:44.487539    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:44.984070    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:44.984094    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:44.984106    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:44.984112    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:44.987930    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:45.483159    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:45.483174    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:45.483183    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:45.483188    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:45.485689    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:45.982348    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:45.982376    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:45.982441    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:45.982451    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:45.986431    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:46.483035    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:46.483061    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:46.483073    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:46.483079    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:46.487152    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:26:46.982639    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:46.982696    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:46.982710    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:46.982717    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:46.986259    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:46.986315    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:47.482155    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:47.482188    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:47.482237    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:47.482249    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:47.485627    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	[... ~120 further polling attempts elided: the identical GET against https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03 was retried every ~500 ms and answered "404 Not Found" in 2-5 milliseconds each time, from 10:26:47 through 10:27:46, while node_ready.go:53 kept logging error getting node "ha-286000-m03": nodes "ha-286000-m03" not found roughly every two seconds ...]
	I0816 10:27:47.480811    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:47.480834    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:47.480846    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:47.480854    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:47.484154    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:47.484225    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:47.981495    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:47.981558    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:47.981573    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:47.981579    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:47.984852    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:48.481331    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:48.481350    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:48.481357    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:48.481360    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:48.483672    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:48.981308    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:48.981334    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:48.981345    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:48.981351    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:48.987316    4656 round_trippers.go:574] Response Status: 404 Not Found in 5 milliseconds
	I0816 10:27:49.480610    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:49.480631    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:49.480642    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:49.480650    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:49.484493    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:49.484576    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:49.980270    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:49.980291    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:49.980303    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:49.980311    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:49.983514    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:50.480630    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:50.480652    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:50.480663    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:50.480672    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:50.484716    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:27:50.980998    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:50.981031    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:50.981079    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:50.981089    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:50.984717    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:51.481764    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:51.481781    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:51.481788    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:51.481792    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:51.483882    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:51.981147    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:51.981167    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:51.981178    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:51.981185    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:51.984837    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:51.984916    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:52.480088    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:52.480109    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:52.480121    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:52.480126    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:52.483987    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:52.980987    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:52.981013    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:52.981029    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:52.981059    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:52.984581    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:53.480043    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:53.480063    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:53.480084    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:53.480092    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:53.483664    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:53.980634    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:53.980693    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:53.980706    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:53.980711    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:53.984482    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:54.480029    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:54.480042    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:54.480051    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:54.480056    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:54.482803    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:54.482872    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:54.980002    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:54.980026    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:54.980038    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:54.980043    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:54.983690    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:55.480147    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:55.480213    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:55.480241    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:55.480251    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:55.484002    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:55.980804    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:55.980819    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:55.980825    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:55.980828    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:55.982902    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:56.480975    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:56.480997    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:56.481006    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:56.481011    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:56.484989    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:56.485061    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:56.980849    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:56.980870    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:56.980880    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:56.980888    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:56.984648    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:57.479708    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:57.479723    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:57.479732    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:57.479736    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:57.482298    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:57.979711    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:57.979729    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:57.979741    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:57.979746    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:57.983031    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:58.481734    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:58.481790    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:58.481805    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:58.481814    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:58.486010    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:27:58.486113    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:58.980860    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:58.980917    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:58.980929    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:58.980937    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:58.984281    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:59.480008    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:59.480075    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:59.480090    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:59.480100    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:59.483377    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:59.981599    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:59.981621    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:59.981633    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:59.981639    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:59.985606    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:00.480770    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:00.480786    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:00.480795    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:00.480798    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:00.483310    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:00.980781    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:00.980807    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:00.980817    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:00.980824    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:00.984773    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:00.984872    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:01.480191    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:01.480210    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:01.480218    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:01.480222    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:01.482706    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:01.979918    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:01.979940    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:01.979950    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:01.979955    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:01.982361    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:02.481286    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:02.481302    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:02.481308    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:02.481311    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:02.483655    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:02.980572    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:02.980632    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:02.980646    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:02.980655    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:02.984337    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:03.479541    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:03.479553    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:03.479560    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:03.479562    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:03.482043    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:03.482109    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:03.980816    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:03.980840    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:03.980877    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:03.980906    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:03.984861    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:04.481240    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:04.481266    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:04.481276    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:04.481282    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:04.485558    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:04.981353    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:04.981413    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:04.981429    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:04.981438    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:04.984812    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:05.480489    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:05.480511    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:05.480523    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:05.480528    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:05.484058    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:05.484144    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:05.979456    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:05.979471    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:05.979480    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:05.979485    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:05.981941    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:06.480803    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:06.480823    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:06.480834    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:06.480841    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:06.483869    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:06.980347    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:06.980368    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:06.980379    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:06.980384    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:06.983544    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:07.479393    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:07.479421    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:07.479481    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:07.479491    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:07.483249    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:07.979964    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:07.979979    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:07.979985    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:07.979988    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:07.983187    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:07.983251    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:08.479456    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:08.479474    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:08.479486    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:08.479493    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:08.483132    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:08.980053    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:08.980073    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:08.980083    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:08.980090    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:08.983933    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:09.481215    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:09.481229    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:09.481237    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:09.481242    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:09.483856    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:09.980082    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:09.980109    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:09.980121    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:09.980129    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:09.983657    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:09.983727    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:10.481137    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:10.481162    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:10.481171    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:10.481178    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:10.485023    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:10.979382    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:10.979406    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:10.979418    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:10.979425    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:10.982616    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:11.480878    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:11.480900    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:11.480924    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:11.480931    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:11.484400    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:11.980148    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:11.980201    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:11.980213    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:11.980220    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:11.983261    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:12.479546    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:12.479558    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:12.479564    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:12.479568    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:12.482006    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:12.482066    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:12.980407    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:12.980433    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:12.980446    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:12.980455    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:12.984259    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:13.481285    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:13.481304    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:13.481316    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:13.481321    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:13.484864    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:13.980948    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:13.980967    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:13.981024    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:13.981032    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:13.983792    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:14.480529    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:14.480592    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:14.480607    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:14.480615    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:14.485369    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:14.485425    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:14.980508    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:14.980528    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:14.980540    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:14.980546    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:14.984308    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:15.479351    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:15.479366    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:15.479375    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:15.479378    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:15.482333    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:15.979273    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:15.979296    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:15.979308    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:15.979317    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:15.983036    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:16.480267    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:16.480288    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:16.480300    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:16.480306    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:16.484104    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:16.979260    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:16.979282    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:16.979294    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:16.979302    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:16.983145    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:16.983218    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:17.479986    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:17.480012    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:17.480023    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:17.480031    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:17.483621    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:17.980230    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:17.980255    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:17.980267    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:17.980273    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:17.983388    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:18.479428    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:18.479444    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:18.479452    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:18.479457    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:18.482401    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:18.980054    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:18.980078    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:18.980090    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:18.980111    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:18.984291    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:18.984384    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:19.479204    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:19.479223    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:19.479235    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:19.479241    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:19.482609    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:19.980334    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:19.980358    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:19.980370    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:19.980376    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:19.984055    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:20.479678    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:20.479704    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:20.479716    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:20.479722    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:20.483940    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:20.980207    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:20.980232    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:20.980243    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:20.980248    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:20.984073    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:21.479009    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:21.479028    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:21.479039    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:21.479045    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:21.482870    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:21.482946    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:21.979028    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:21.979048    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:21.979060    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:21.979067    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:21.982782    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:22.480202    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:22.480229    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:22.480242    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:22.480248    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:22.484332    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:22.979809    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:22.979829    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:22.979861    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:22.979867    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:22.982210    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:23.480520    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:23.480541    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:23.480556    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:23.480564    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:23.484344    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:23.484415    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:23.978872    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:23.978890    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:23.978939    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:23.978947    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:23.981588    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:24.478940    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:24.479024    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:24.479038    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:24.479046    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:24.482719    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:24.980016    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:24.980040    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:24.980053    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:24.980061    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:24.984315    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:25.478940    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:25.478960    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:25.478971    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:25.478978    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:25.483052    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:25.979269    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:25.979296    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:25.979308    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:25.979314    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:25.983114    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:25.983257    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:26.479781    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:26.479806    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:26.479817    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:26.479828    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:26.483419    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:26.979605    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:26.979626    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:26.979637    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:26.979644    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:26.982753    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:27.479413    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:27.479438    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:27.479450    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:27.479458    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:27.483110    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:27.980825    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:27.980852    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:27.980863    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:27.980870    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:27.984767    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:27.984839    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:28.479839    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:28.479867    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:28.479880    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:28.479888    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:28.483764    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:28.978775    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:28.978797    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:28.978808    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:28.978815    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:28.982911    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:29.480812    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:29.480838    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:29.480848    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:29.480854    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:29.484272    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:29.980179    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:29.980196    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:29.980204    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:29.980208    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:29.983010    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:30.479018    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:30.479037    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:30.479056    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:30.479060    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:30.480976    4656 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0816 10:28:30.481040    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:30.979780    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:30.979800    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:30.979810    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:30.979815    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:30.983686    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:31.479047    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:31.479069    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:31.479081    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:31.479088    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:31.482916    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:31.979327    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:31.979383    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:31.979396    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:31.979406    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:31.982781    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:32.479680    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:32.479701    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:32.479712    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:32.479718    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:32.483452    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:32.483551    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:32.979627    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:32.979653    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:32.979665    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:32.979672    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:32.983502    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:33.479195    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:33.479213    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:33.479223    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:33.479231    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:33.482627    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:33.978591    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:33.978614    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:33.978669    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:33.978677    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:33.982499    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:34.478777    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:34.478796    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:34.478805    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:34.478810    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:34.481463    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:34.979814    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:34.979835    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:34.979847    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:34.979856    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:34.984020    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:34.984095    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:35.478731    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:35.478759    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:35.478769    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:35.478775    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:35.482596    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:35.979086    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:35.979114    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:35.979127    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:35.979133    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:35.982826    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:36.478524    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:36.478548    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:36.478560    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:36.478568    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:36.482514    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:36.978759    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:36.978778    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:36.978789    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:36.978795    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:36.982532    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:37.478813    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:37.478836    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:37.478848    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:37.478854    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:37.482815    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:37.483027    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:37.980493    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:37.980519    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:37.980530    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:37.980535    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:37.984193    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:38.479572    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:38.479594    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:38.479607    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:38.479615    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:38.483949    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:38.980347    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:38.980372    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:38.980383    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:38.980388    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:38.984077    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:39.480084    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:39.480110    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:39.480121    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:39.480127    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:39.483858    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:39.483927    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:39.978886    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:39.978908    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:39.978920    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:39.978927    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:39.982482    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:40.478804    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:40.478830    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:40.478841    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:40.478847    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:40.482793    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:40.979356    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:40.979380    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:40.979392    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:40.979401    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:40.983583    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:41.479873    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:41.479894    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:41.479913    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:41.479918    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:41.483490    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:41.978368    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:41.978382    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:41.978389    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:41.978393    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:41.984198    4656 round_trippers.go:574] Response Status: 404 Not Found in 5 milliseconds
	I0816 10:28:41.984261    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:42.478642    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:42.478662    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:42.478675    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:42.478681    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:42.482721    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:42.979333    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:42.979358    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:42.979370    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:42.979376    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:42.983591    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:43.478780    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:43.478803    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:43.478816    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:43.478824    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:43.482771    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:43.978807    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:43.978858    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:43.978871    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:43.978878    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:43.982183    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:44.479103    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:44.479131    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:44.479208    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:44.479217    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:44.483010    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:44.483102    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:44.980168    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:44.980193    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:44.980205    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:44.980212    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:44.984284    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:45.478814    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:45.478840    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:45.478851    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:45.478856    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:45.482566    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:45.978463    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:45.978490    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:45.978503    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:45.978509    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:45.982104    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:46.478332    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:46.478358    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:46.478370    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:46.478376    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:46.482196    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:46.980202    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:46.980226    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:46.980235    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:46.980242    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:46.984038    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:46.984113    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:47.480191    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:47.480236    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:47.480249    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:47.480256    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:47.483962    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:47.978487    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:47.978512    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:47.978524    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:47.978529    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:47.982450    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:48.478150    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:48.478167    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:48.478183    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:48.478192    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:48.481632    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:48.978324    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:48.978347    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:48.978359    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:48.978366    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:48.982094    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:49.479467    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:49.479488    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:49.479500    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:49.479508    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:49.483304    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:49.483387    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:49.979540    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:49.979559    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:49.979567    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:49.979571    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:49.982173    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:50.478844    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:50.478865    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:50.478876    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:50.478882    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:50.482687    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:50.979032    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:50.979057    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:50.979069    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:50.979075    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:50.982937    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:51.477969    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:51.477985    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:51.477992    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:51.477996    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:51.480844    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:51.978499    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:51.978525    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:51.978594    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:51.978604    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:51.982296    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:51.982369    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:52.478660    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:52.478681    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:52.478693    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:52.478700    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:52.482493    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:52.979157    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:52.979218    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:52.979232    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:52.979243    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:52.982949    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:53.477935    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:53.477952    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:53.477964    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:53.477971    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:53.481445    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:53.979399    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:53.979426    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:53.979437    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:53.979442    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:53.983298    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:53.983373    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:54.477959    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:54.477983    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:54.477992    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:54.478000    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:54.480818    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:54.977914    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:54.977928    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:54.977937    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:54.977943    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:54.980985    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:55.477939    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:55.477959    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:55.477971    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:55.477980    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:55.481823    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:55.978706    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:55.978725    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:55.978734    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:55.978740    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:55.981215    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:56.478017    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:56.478041    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:56.478055    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:56.478066    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:56.481827    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:56.481901    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:56.979955    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:56.979976    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:56.979987    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:56.979994    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:56.984295    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:57.478039    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:57.478057    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:57.478067    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:57.478073    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:57.481105    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:57.978248    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:57.978270    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:57.978283    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:57.978291    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:57.982239    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:58.477943    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:58.477971    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:58.477987    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:58.478001    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:58.481727    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:58.978661    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:58.978678    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:58.978687    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:58.978693    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:58.981579    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:58.981644    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:59.479830    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:59.479861    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:59.479927    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:59.479949    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:59.483371    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:59.977787    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:59.977804    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:59.977810    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:59.977813    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:59.979974    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:29:00.478024    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:00.478039    4656 round_trippers.go:469] Request Headers:
	I0816 10:29:00.478047    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:00.478051    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:00.480707    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:29:00.979674    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:00.979700    4656 round_trippers.go:469] Request Headers:
	I0816 10:29:00.979712    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:00.979718    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:00.983620    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:29:00.983742    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:29:01.478022    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:01.478042    4656 round_trippers.go:469] Request Headers:
	I0816 10:29:01.478053    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:01.478060    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:01.481326    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:29:01.978405    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:01.978425    4656 round_trippers.go:469] Request Headers:
	I0816 10:29:01.978434    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:01.978438    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:01.981188    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:29:02.479658    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:02.479772    4656 round_trippers.go:469] Request Headers:
	I0816 10:29:02.479790    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:02.479798    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:02.483872    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:29:02.979772    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:02.979794    4656 round_trippers.go:469] Request Headers:
	I0816 10:29:02.979807    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:02.979815    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:02.983496    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:29:03.477789    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:03.477808    4656 round_trippers.go:469] Request Headers:
	I0816 10:29:03.477817    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:03.477821    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:03.480617    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:29:03.480674    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:29:03.977650    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:03.977672    4656 round_trippers.go:469] Request Headers:
	I0816 10:29:03.977683    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:03.977689    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:03.981168    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:29:04.479691    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:04.479717    4656 round_trippers.go:469] Request Headers:
	I0816 10:29:04.479729    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:04.479737    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:04.483384    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:29:04.978063    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:04.978077    4656 round_trippers.go:469] Request Headers:
	I0816 10:29:04.978086    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:04.978091    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:04.980657    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:29:05.479407    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:05.479427    4656 round_trippers.go:469] Request Headers:
	I0816 10:29:05.479438    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:05.479443    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:05.482914    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:29:05.483084    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:29:05.979238    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:05.979260    4656 round_trippers.go:469] Request Headers:
	I0816 10:29:05.979272    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:05.979280    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:05.982997    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:29:06.478226    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:06.478251    4656 round_trippers.go:469] Request Headers:
	I0816 10:29:06.478264    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:06.478270    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:06.482103    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:29:06.482169    4656 node_ready.go:38] duration metric: took 4m0.00480463s for node "ha-286000-m03" to be "Ready" ...
	I0816 10:29:06.503388    4656 out.go:201] 
	W0816 10:29:06.524396    4656 out.go:270] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0816 10:29:06.524419    4656 out.go:270] * 
	W0816 10:29:06.525619    4656 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:29:06.587617    4656 out.go:201] 
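	
	[editor's note] The four minutes of 404s above are minikube's node-readiness wait polling GET /api/v1/nodes/ha-286000-m03 roughly every 500 ms until the node object registers and reports Ready, ending in the GUEST_START failure printed above. As a rough sketch only (a hypothetical helper, not minikube's actual node_ready.go; clientset wiring omitted), an equivalent client-go poll loop looks like this:
	
	// Package nodewait: hypothetical sketch of the ~500ms readiness poll
	// seen in the log above. Not minikube's actual implementation.
	package nodewait
	
	import (
		"context"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)
	
	// WaitNodeReady polls the node every 500ms until it exists and reports
	// Ready, or the timeout elapses (the "context deadline exceeded" case).
	func WaitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if apierrors.IsNotFound(err) {
					// The repeated `nodes "ha-286000-m03" not found` case:
					// the node object is not registered yet, keep polling.
					return false, nil
				}
				if err != nil {
					return false, err // any other API error aborts the wait
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}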
	
	
	==> Docker <==
	Aug 16 17:24:28 ha-286000 cri-dockerd[1436]: time="2024-08-16T17:24:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/137dbec658acee61ce1910017edb0f5b3a85b75c5e3049e8bd90f1dbefcdb1c7/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 16 17:24:29 ha-286000 dockerd[1187]: time="2024-08-16T17:24:28.998809824Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:24:29 ha-286000 dockerd[1187]: time="2024-08-16T17:24:28.998948255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:24:29 ha-286000 dockerd[1187]: time="2024-08-16T17:24:28.998962428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:24:29 ha-286000 dockerd[1187]: time="2024-08-16T17:24:28.999102266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:24:29 ha-286000 dockerd[1187]: time="2024-08-16T17:24:29.047276534Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:24:29 ha-286000 dockerd[1187]: time="2024-08-16T17:24:29.047427124Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:24:29 ha-286000 dockerd[1187]: time="2024-08-16T17:24:29.047450862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:24:29 ha-286000 dockerd[1187]: time="2024-08-16T17:24:29.047581008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:24:29 ha-286000 dockerd[1187]: time="2024-08-16T17:24:29.126544781Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:24:29 ha-286000 dockerd[1187]: time="2024-08-16T17:24:29.126662219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:24:29 ha-286000 dockerd[1187]: time="2024-08-16T17:24:29.126672757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:24:29 ha-286000 dockerd[1187]: time="2024-08-16T17:24:29.126811937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:24:40 ha-286000 dockerd[1187]: time="2024-08-16T17:24:40.084727507Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:24:40 ha-286000 dockerd[1187]: time="2024-08-16T17:24:40.084839498Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:24:40 ha-286000 dockerd[1187]: time="2024-08-16T17:24:40.084854114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:24:40 ha-286000 dockerd[1187]: time="2024-08-16T17:24:40.085367785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:24:59 ha-286000 dockerd[1181]: time="2024-08-16T17:24:59.347142049Z" level=info msg="ignoring event" container=0c18c93270e7a68d0392d4acb324154ee21a1f857073e107621429751d23f788 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 16 17:24:59 ha-286000 dockerd[1187]: time="2024-08-16T17:24:59.347787162Z" level=info msg="shim disconnected" id=0c18c93270e7a68d0392d4acb324154ee21a1f857073e107621429751d23f788 namespace=moby
	Aug 16 17:24:59 ha-286000 dockerd[1187]: time="2024-08-16T17:24:59.347864246Z" level=warning msg="cleaning up after shim disconnected" id=0c18c93270e7a68d0392d4acb324154ee21a1f857073e107621429751d23f788 namespace=moby
	Aug 16 17:24:59 ha-286000 dockerd[1187]: time="2024-08-16T17:24:59.347873243Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 16 17:25:12 ha-286000 dockerd[1187]: time="2024-08-16T17:25:12.082815222Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:25:12 ha-286000 dockerd[1187]: time="2024-08-16T17:25:12.082919934Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:25:12 ha-286000 dockerd[1187]: time="2024-08-16T17:25:12.082946545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:25:12 ha-286000 dockerd[1187]: time="2024-08-16T17:25:12.083100138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8803f7012c881       6e38f40d628db                                                                                         5 minutes ago       Running             storage-provisioner       4                   fca40ed5fc112       storage-provisioner
	88937b4d9b3fc       045733566833c                                                                                         6 minutes ago       Running             kube-controller-manager   2                   b20f8615dee49       kube-controller-manager-ha-286000
	fdeb6586df346       8c811b4aec35f                                                                                         6 minutes ago       Running             busybox                   1                   137dbec658ace       busybox-7dff88458-dvmvk
	f9023c4cc7d09       12968670680f4                                                                                         6 minutes ago       Running             kindnet-cni               1                   b874afa97d609       kindnet-whqxb
	3cf3b8e6c2561       cbb01a7bd410d                                                                                         6 minutes ago       Running             coredns                   1                   0b81f15659889       coredns-6f6b679f8f-2kqjf
	0c18c93270e7a       6e38f40d628db                                                                                         6 minutes ago       Exited              storage-provisioner       3                   fca40ed5fc112       storage-provisioner
	5cf894bf46807       cbb01a7bd410d                                                                                         6 minutes ago       Running             coredns                   1                   26513e2b92d66       coredns-6f6b679f8f-rfbz7
	60feb425249e9       ad83b2ca7b09e                                                                                         6 minutes ago       Running             kube-proxy                1                   8008f00487db3       kube-proxy-w4nt2
	2d90cfc5f1d77       38af8ddebf499                                                                                         6 minutes ago       Running             kube-vip                  0                   bda0d9ff673b9       kube-vip-ha-286000
	77cac41fb9bde       2e96e5913fc06                                                                                         6 minutes ago       Running             etcd                      1                   5ee84d4289ece       etcd-ha-286000
	bcd696090d544       1766f54c897f0                                                                                         6 minutes ago       Running             kube-scheduler            1                   97f04e9e38892       kube-scheduler-ha-286000
	64b3c5f995d8d       604f5db92eaa8                                                                                         6 minutes ago       Running             kube-apiserver            4                   8d4b6b4a23609       kube-apiserver-ha-286000
	257f5b412fe2a       045733566833c                                                                                         6 minutes ago       Exited              kube-controller-manager   1                   b20f8615dee49       kube-controller-manager-ha-286000
	63b366c951f2a       604f5db92eaa8                                                                                         8 minutes ago       Exited              kube-apiserver            3                   818ee6dafe6c9       kube-apiserver-ha-286000
	5bbe7a8cc492e       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   25 minutes ago      Exited              busybox                   0                   1873ade92edb9       busybox-7dff88458-dvmvk
	bcd7170b050a5       cbb01a7bd410d                                                                                         27 minutes ago      Exited              coredns                   0                   452942e267927       coredns-6f6b679f8f-rfbz7
	60d3d03e297ce       cbb01a7bd410d                                                                                         27 minutes ago      Exited              coredns                   0                   fbd84fb813c90       coredns-6f6b679f8f-2kqjf
	2677394dcd66f       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              28 minutes ago      Exited              kindnet-cni               0                   afaf657e85c32       kindnet-whqxb
	81f6c96d46494       ad83b2ca7b09e                                                                                         28 minutes ago      Exited              kube-proxy                0                   dfb822fc27b5e       kube-proxy-w4nt2
	f7b2e9efdd94f       1766f54c897f0                                                                                         28 minutes ago      Exited              kube-scheduler            0                   8e2b186980aff       kube-scheduler-ha-286000
	d8dadff6cec78       2e96e5913fc06                                                                                         28 minutes ago      Exited              etcd                      0                   cdb14ff7d8896       etcd-ha-286000
	
	
	==> coredns [3cf3b8e6c256] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:58071 - 29432 "HINFO IN 269282700017442046.6298598734389881778. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.104629212s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1710767206]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 17:24:29.307) (total time: 30004ms):
	Trace[1710767206]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (17:24:59.311)
	Trace[1710767206]: [30.004743477s] [30.004743477s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1321835322]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 17:24:29.307) (total time: 30005ms):
	Trace[1321835322]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (17:24:59.312)
	Trace[1321835322]: [30.005483265s] [30.005483265s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[816453993]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 17:24:29.312) (total time: 30003ms):
	Trace[816453993]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (17:24:59.315)
	Trace[816453993]: [30.003551219s] [30.003551219s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [5cf894bf4680] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51498 - 60294 "HINFO IN 6373854949728581283.8966112489703867485. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.072467092s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[817614149]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 17:24:29.307) (total time: 30005ms):
	Trace[817614149]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (17:24:59.311)
	Trace[817614149]: [30.005208149s] [30.005208149s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1980986726]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 17:24:29.307) (total time: 30005ms):
	Trace[1980986726]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (17:24:59.311)
	Trace[1980986726]: [30.005923834s] [30.005923834s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1722306438]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 17:24:29.312) (total time: 30003ms):
	Trace[1722306438]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (17:24:59.315)
	Trace[1722306438]: [30.003847815s] [30.003847815s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
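	
	[editor's note] Both restarted CoreDNS containers ([3cf3b8e6c256] and [5cf894bf4680]) block for ~30s and then fail with `dial tcp 10.96.0.1:443: i/o timeout`, i.e. the in-cluster `kubernetes` Service VIP was unreachable from the pod network while the control plane recovered. A minimal connectivity probe of that shape (a hedged sketch; the address and timeout simply mirror the log) is:
	
	// Dials the kubernetes Service VIP the way the failing client-go
	// reflectors in the CoreDNS log do, and reports how the dial fails.
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		addr := "10.96.0.1:443" // in-cluster kubernetes Service VIP from the log
		conn, err := net.DialTimeout("tcp", addr, 30*time.Second)
		if err != nil {
			// An i/o timeout (as in the log) usually means packets are being
			// silently dropped, e.g. kube-proxy rules not programmed yet;
			// "connection refused" would instead mean the VIP routes but no
			// backend is listening.
			fmt.Printf("dial %s failed: %v\n", addr, err)
			return
		}
		defer conn.Close()
		fmt.Printf("dial %s succeeded\n", addr)
	}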
	
	
	==> coredns [60d3d03e297c] <==
	[INFO] plugin/kubernetes: Trace[1595166943]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 17:19:50.818) (total time: 11830ms):
	Trace[1595166943]: ---"Objects listed" error:Unauthorized 11830ms (17:20:02.649)
	Trace[1595166943]: [11.830466351s] [11.830466351s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2678": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2678": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[852140040]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 17:20:06.131) (total time: 10521ms):
	Trace[852140040]: ---"Objects listed" error:Unauthorized 10521ms (17:20:16.652)
	Trace[852140040]: [10.521589006s] [10.521589006s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [bcd7170b050a] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[1786059905]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 17:20:05.425) (total time: 11223ms):
	Trace[1786059905]: ---"Objects listed" error:Unauthorized 11223ms (17:20:16.649)
	Trace[1786059905]: [11.223878813s] [11.223878813s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: Unexpected error when reading response body: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: unexpected error when reading response body. Please retry. Original error: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: Trace[1902597424]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 17:20:18.397) (total time: 12364ms):
	Trace[1902597424]: ---"Objects listed" error:unexpected error when reading response body. Please retry. Original error: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug="" 12364ms (17:20:30.761)
	Trace[1902597424]: [12.364669513s] [12.364669513s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: unexpected error when reading response body. Please retry. Original error: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: Unexpected error when reading response body: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: unexpected error when reading response body. Please retry. Original error: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: unexpected error when reading response body. Please retry. Original error: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-286000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-286000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=ha-286000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T10_02_26_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:02:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-286000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:30:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 17:29:26 +0000   Fri, 16 Aug 2024 17:22:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 17:29:26 +0000   Fri, 16 Aug 2024 17:22:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 17:29:26 +0000   Fri, 16 Aug 2024 17:22:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 17:29:26 +0000   Fri, 16 Aug 2024 17:22:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-286000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 2010adee17654cf9b80256054061ea5a
	  System UUID:                ad96408c-0000-0000-89eb-d74e5b68d297
	  Boot ID:                    bef9467e-8834-4316-92a2-f595c590a856
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-dvmvk              0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 coredns-6f6b679f8f-2kqjf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 coredns-6f6b679f8f-rfbz7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-ha-286000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kindnet-whqxb                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      28m
	  kube-system                 kube-apiserver-ha-286000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-ha-286000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-w4nt2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-ha-286000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-vip-ha-286000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m15s                  kube-proxy       
	  Normal  Starting                 28m                    kube-proxy       
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)      kubelet          Node ha-286000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  28m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)      kubelet          Node ha-286000 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)      kubelet          Node ha-286000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 28m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  28m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 28m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           28m                    node-controller  Node ha-286000 event: Registered Node ha-286000 in Controller
	  Normal  RegisteredNode           27m                    node-controller  Node ha-286000 event: Registered Node ha-286000 in Controller
	  Normal  RegisteredNode           9m58s                  node-controller  Node ha-286000 event: Registered Node ha-286000 in Controller
	  Normal  NodeNotReady             8m35s (x2 over 10m)    node-controller  Node ha-286000 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  8m14s (x3 over 28m)    kubelet          Node ha-286000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m14s (x3 over 28m)    kubelet          Node ha-286000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m14s (x3 over 28m)    kubelet          Node ha-286000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m14s (x3 over 27m)    kubelet          Node ha-286000 status is now: NodeReady
	  Normal  Starting                 6m56s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m55s (x8 over 6m55s)  kubelet          Node ha-286000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m55s (x8 over 6m55s)  kubelet          Node ha-286000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m55s (x7 over 6m55s)  kubelet          Node ha-286000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m23s                  node-controller  Node ha-286000 event: Registered Node ha-286000 in Controller
	  Normal  RegisteredNode           6m3s                   node-controller  Node ha-286000 event: Registered Node ha-286000 in Controller
	
	
	Name:               ha-286000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-286000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=ha-286000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T10_03_22_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:03:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-286000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:30:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 17:29:28 +0000   Fri, 16 Aug 2024 17:22:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 17:29:28 +0000   Fri, 16 Aug 2024 17:22:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 17:29:28 +0000   Fri, 16 Aug 2024 17:22:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 17:29:28 +0000   Fri, 16 Aug 2024 17:22:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-286000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 ee275d4bd6234ce08a6c7d60b8d19b43
	  System UUID:                f7634935-0000-0000-a751-ac72999da031
	  Boot ID:                    035257b9-18e7-4adc-8e61-b35126468d96
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-k9m92                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 etcd-ha-286000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27m
	  kube-system                 kindnet-t9kjf                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-ha-286000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-ha-286000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-pt669                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-ha-286000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-vip-ha-286000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m9s                   kube-proxy       
	  Normal  Starting                 9m43s                  kube-proxy       
	  Normal  Starting                 27m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  27m (x8 over 27m)      kubelet          Node ha-286000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m (x8 over 27m)      kubelet          Node ha-286000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m (x7 over 27m)      kubelet          Node ha-286000-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27m                    node-controller  Node ha-286000-m02 event: Registered Node ha-286000-m02 in Controller
	  Normal  RegisteredNode           27m                    node-controller  Node ha-286000-m02 event: Registered Node ha-286000-m02 in Controller
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node ha-286000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node ha-286000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node ha-286000-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m58s                  node-controller  Node ha-286000-m02 event: Registered Node ha-286000-m02 in Controller
	  Normal  NodeNotReady             8m40s                  node-controller  Node ha-286000-m02 status is now: NodeNotReady
	  Normal  Starting                 6m36s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m36s (x8 over 6m36s)  kubelet          Node ha-286000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m36s (x8 over 6m36s)  kubelet          Node ha-286000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m36s (x7 over 6m36s)  kubelet          Node ha-286000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m23s                  node-controller  Node ha-286000-m02 event: Registered Node ha-286000-m02 in Controller
	  Normal  RegisteredNode           6m3s                   node-controller  Node ha-286000-m02 event: Registered Node ha-286000-m02 in Controller
	
	
	Name:               ha-286000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-286000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=ha-286000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T10_17_22_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:17:21 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-286000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:22:50 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 16 Aug 2024 17:22:27 +0000   Fri, 16 Aug 2024 17:25:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 16 Aug 2024 17:22:27 +0000   Fri, 16 Aug 2024 17:25:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 16 Aug 2024 17:22:27 +0000   Fri, 16 Aug 2024 17:25:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 16 Aug 2024 17:22:27 +0000   Fri, 16 Aug 2024 17:25:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-286000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 324c7ca05f77443abc4e861a3d5a5224
	  System UUID:                9a6645c6-0000-0000-8cbd-49b6a6a0383b
	  Boot ID:                    839ab079-775d-4939-ac8e-9fb255ba29df
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-99xmp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kindnet-b9r6s              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-5qhgk           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 13m                  kube-proxy       
	  Normal  NodeAllocatableEnforced  13m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                  node-controller  Node ha-286000-m04 event: Registered Node ha-286000-m04 in Controller
	  Normal  RegisteredNode           13m                  node-controller  Node ha-286000-m04 event: Registered Node ha-286000-m04 in Controller
	  Normal  RegisteredNode           9m58s                node-controller  Node ha-286000-m04 event: Registered Node ha-286000-m04 in Controller
	  Normal  NodeNotReady             8m40s (x2 over 10m)  node-controller  Node ha-286000-m04 status is now: NodeNotReady
	  Normal  NodeHasSufficientPID     8m18s (x4 over 13m)  kubelet          Node ha-286000-m04 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  8m18s (x4 over 13m)  kubelet          Node ha-286000-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m18s (x4 over 13m)  kubelet          Node ha-286000-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                8m18s (x3 over 13m)  kubelet          Node ha-286000-m04 status is now: NodeReady
	  Normal  RegisteredNode           6m23s                node-controller  Node ha-286000-m04 event: Registered Node ha-286000-m04 in Controller
	  Normal  RegisteredNode           6m3s                 node-controller  Node ha-286000-m04 event: Registered Node ha-286000-m04 in Controller
	  Normal  NodeNotReady             5m43s                node-controller  Node ha-286000-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.035803] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.008121] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.699152] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.007082] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.882621] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.230843] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.349832] systemd-fstab-generator[472]: Ignoring "noauto" option for root device
	[  +0.095939] systemd-fstab-generator[484]: Ignoring "noauto" option for root device
	[  +2.008291] systemd-fstab-generator[1111]: Ignoring "noauto" option for root device
	[  +0.258306] systemd-fstab-generator[1147]: Ignoring "noauto" option for root device
	[  +0.099664] systemd-fstab-generator[1159]: Ignoring "noauto" option for root device
	[  +0.061191] kauditd_printk_skb: 123 callbacks suppressed
	[  +0.060084] systemd-fstab-generator[1173]: Ignoring "noauto" option for root device
	[  +2.467356] systemd-fstab-generator[1389]: Ignoring "noauto" option for root device
	[  +0.100054] systemd-fstab-generator[1401]: Ignoring "noauto" option for root device
	[  +0.107009] systemd-fstab-generator[1413]: Ignoring "noauto" option for root device
	[  +0.132145] systemd-fstab-generator[1428]: Ignoring "noauto" option for root device
	[  +0.458193] systemd-fstab-generator[1593]: Ignoring "noauto" option for root device
	[  +6.918226] kauditd_printk_skb: 190 callbacks suppressed
	[Aug16 17:24] kauditd_printk_skb: 40 callbacks suppressed
	[ +21.525016] kauditd_printk_skb: 82 callbacks suppressed
	
	
	==> etcd [77cac41fb9bd] <==
	{"level":"info","ts":"2024-08-16T17:24:16.876969Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:24:16.894983Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"9633c02797b6d34","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-16T17:24:16.895123Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:24:16.966205Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"9633c02797b6d34","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-16T17:24:16.966225Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"warn","ts":"2024-08-16T17:24:17.366286Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740435865607170,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-16T17:24:17.388375Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"9633c02797b6d34","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-16T17:24:17.389606Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"9633c02797b6d34","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-16T17:24:17.664302Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 3, index: 4621, vote: b8c6c7563d17d844] cast MsgPreVote for 9633c02797b6d34 [logterm: 3, index: 4621] at term 3"}
	{"level":"info","ts":"2024-08-16T17:24:17.667262Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [term: 3] received a MsgVote message with higher term from 9633c02797b6d34 [term: 4]"}
	{"level":"info","ts":"2024-08-16T17:24:17.667420Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became follower at term 4"}
	{"level":"info","ts":"2024-08-16T17:24:17.667474Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 3, index: 4621, vote: 0] cast MsgVote for 9633c02797b6d34 [logterm: 3, index: 4621] at term 4"}
	{"level":"info","ts":"2024-08-16T17:24:17.668493Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b8c6c7563d17d844 elected leader 9633c02797b6d34 at term 4"}
	{"level":"warn","ts":"2024-08-16T17:24:17.668980Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"3.305918651s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-08-16T17:24:17.669025Z","caller":"traceutil/trace.go:171","msg":"trace[958839912] range","detail":"{range_begin:; range_end:; }","duration":"3.306272649s","start":"2024-08-16T17:24:14.362747Z","end":"2024-08-16T17:24:17.669020Z","steps":["trace[958839912] 'agreement among raft nodes before linearized reading'  (duration: 3.305917726s)"],"step_count":1}
	{"level":"error","ts":"2024-08-16T17:24:17.669050Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: leader changed\n[+]data_corruption ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-08-16T17:24:17.672550Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b8c6c7563d17d844","local-member-attributes":"{Name:ha-286000 ClientURLs:[https://192.169.0.5:2379]}","request-path":"/0/members/b8c6c7563d17d844/attributes","cluster-id":"b73189effde9bc63","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-16T17:24:17.672690Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T17:24:17.673076Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-16T17:24:17.673114Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-16T17:24:17.672747Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T17:24:17.675839Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T17:24:17.676355Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T17:24:17.676582Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.5:2379"}
	{"level":"info","ts":"2024-08-16T17:24:17.677166Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [d8dadff6cec7] <==
	{"level":"info","ts":"2024-08-16T17:23:23.603134Z","caller":"traceutil/trace.go:171","msg":"trace[1695899457] range","detail":"{range_begin:/registry/persistentvolumeclaims/; range_end:/registry/persistentvolumeclaims0; }","duration":"7.280387387s","start":"2024-08-16T17:23:16.322744Z","end":"2024-08-16T17:23:23.603132Z","steps":["trace[1695899457] 'agreement among raft nodes before linearized reading'  (duration: 7.280377262s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T17:23:23.603145Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T17:23:16.322710Z","time spent":"7.280431347s","remote":"127.0.0.1:56178","response type":"/etcdserverpb.KV/Range","request count":0,"request size":72,"response count":0,"response size":0,"request content":"key:\"/registry/persistentvolumeclaims/\" range_end:\"/registry/persistentvolumeclaims0\" count_only:true "}
	2024/08/16 17:23:23 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-16T17:23:23.603197Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"3.204231928s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingadmissionpolicies/\" range_end:\"/registry/validatingadmissionpolicies0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-08-16T17:23:23.603208Z","caller":"traceutil/trace.go:171","msg":"trace[821531247] range","detail":"{range_begin:/registry/validatingadmissionpolicies/; range_end:/registry/validatingadmissionpolicies0; }","duration":"3.204245539s","start":"2024-08-16T17:23:20.398959Z","end":"2024-08-16T17:23:23.603205Z","steps":["trace[821531247] 'agreement among raft nodes before linearized reading'  (duration: 3.204231749s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T17:23:23.603218Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T17:23:20.398944Z","time spent":"3.204271101s","remote":"127.0.0.1:56532","response type":"/etcdserverpb.KV/Range","request count":0,"request size":82,"response count":0,"response size":0,"request content":"key:\"/registry/validatingadmissionpolicies/\" range_end:\"/registry/validatingadmissionpolicies0\" count_only:true "}
	2024/08/16 17:23:23 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-16T17:23:23.604807Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740435533448225,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-08-16T17:23:23.605017Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-16T17:23:23.605028Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-16T17:23:23.605034Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 3"}
	{"level":"info","ts":"2024-08-16T17:23:23.605042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 3, index: 4621] sent MsgPreVote request to 9633c02797b6d34 at term 3"}
	{"level":"warn","ts":"2024-08-16T17:23:23.646548Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-16T17:23:23.646617Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-16T17:23:23.646652Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-16T17:23:23.647836Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:23:23.647877Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:23:23.647896Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:23:23.648043Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:23:23.648105Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:23:23.648130Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:23:23.648158Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:23:23.650448Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-08-16T17:23:23.650508Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-08-16T17:23:23.650516Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-286000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> kernel <==
	 17:30:45 up 7 min,  0 users,  load average: 0.21, 0.11, 0.04
	Linux ha-286000 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2677394dcd66] <==
	I0816 17:22:35.224951       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	I0816 17:22:45.231619       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:22:45.231806       1 main.go:299] handling current node
	I0816 17:22:45.231910       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:22:45.231994       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:22:45.232158       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 17:22:45.232263       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	I0816 17:22:55.225733       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:22:55.225894       1 main.go:299] handling current node
	I0816 17:22:55.225954       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:22:55.226004       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:22:55.226143       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 17:22:55.226223       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	I0816 17:23:05.224175       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:23:05.224416       1 main.go:299] handling current node
	I0816 17:23:05.224540       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:23:05.224830       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:23:05.225112       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 17:23:05.225305       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	I0816 17:23:15.226037       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:23:15.226204       1 main.go:299] handling current node
	I0816 17:23:15.226257       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:23:15.226357       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:23:15.226471       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 17:23:15.226617       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [f9023c4cc7d0] <==
	I0816 17:30:00.455762       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	I0816 17:30:10.454768       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:30:10.454806       1 main.go:299] handling current node
	I0816 17:30:10.454817       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:30:10.454822       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:30:10.454887       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 17:30:10.454940       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	I0816 17:30:20.455049       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:30:20.455143       1 main.go:299] handling current node
	I0816 17:30:20.455194       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:30:20.455218       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:30:20.455455       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 17:30:20.455517       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	I0816 17:30:30.455006       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:30:30.455047       1 main.go:299] handling current node
	I0816 17:30:30.455058       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:30:30.455063       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:30:30.455404       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 17:30:30.455457       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	I0816 17:30:40.455228       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 17:30:40.455455       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	I0816 17:30:40.455587       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:30:40.455616       1 main.go:299] handling current node
	I0816 17:30:40.455625       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:30:40.455630       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [63b366c951f2] <==
	W0816 17:23:23.632159       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.632202       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.632235       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.632264       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.632290       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.632317       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.632341       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.632369       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.632396       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.632421       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.632450       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.632476       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.632504       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.632531       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.632590       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0816 17:23:23.633069       1 controller.go:195] "Failed to update lease" err="rpc error: code = Unknown desc = malformed header: missing HTTP content-type"
	W0816 17:23:23.633101       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.633121       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.633137       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.633159       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.633175       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.633190       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.633205       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.633221       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.633236       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [64b3c5f995d8] <==
	I0816 17:24:18.567658       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0816 17:24:18.568203       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0816 17:24:18.568352       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0816 17:24:18.635954       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0816 17:24:18.636020       1 policy_source.go:224] refreshing policies
	I0816 17:24:18.661089       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0816 17:24:18.661333       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0816 17:24:18.665098       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0816 17:24:18.665805       1 shared_informer.go:320] Caches are synced for configmaps
	I0816 17:24:18.666159       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0816 17:24:18.666396       1 aggregator.go:171] initial CRD sync complete...
	I0816 17:24:18.666573       1 autoregister_controller.go:144] Starting autoregister controller
	I0816 17:24:18.669371       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0816 17:24:18.669507       1 cache.go:39] Caches are synced for autoregister controller
	I0816 17:24:18.669649       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0816 17:24:18.673264       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0816 17:24:18.673925       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0816 17:24:18.676414       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0816 17:24:18.681474       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0816 17:24:18.693871       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0816 17:24:18.734462       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0816 17:24:19.567976       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0816 17:24:19.905347       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5 192.169.0.6]
	I0816 17:24:19.907243       1 controller.go:615] quota admission added evaluator for: endpoints
	I0816 17:24:19.913024       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [257f5b412fe2] <==
	I0816 17:23:57.992802       1 serving.go:386] Generated self-signed cert in-memory
	I0816 17:23:58.299343       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0816 17:23:58.299552       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 17:23:58.302121       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0816 17:23:58.302479       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0816 17:23:58.302580       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0816 17:23:58.303517       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0816 17:24:18.587870       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-controller-manager [88937b4d9b3f] <==
	I0816 17:24:42.951528       1 shared_informer.go:320] Caches are synced for garbage collector
	I0816 17:24:42.974213       1 shared_informer.go:320] Caches are synced for garbage collector
	I0816 17:24:42.974443       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0816 17:25:02.082814       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:25:02.095196       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:25:02.127968       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="39.027587ms"
	I0816 17:25:02.128030       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="24.447µs"
	I0816 17:25:02.643392       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:25:07.139420       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:25:08.371423       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-mqfxs EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-mqfxs\": the object has been modified; please apply your changes to the latest version and try again"
	I0816 17:25:08.371686       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"fd9fe61f-ffc6-4f61-848a-91dfce599e44", APIVersion:"v1", ResourceVersion:"301", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-mqfxs EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-mqfxs": the object has been modified; please apply your changes to the latest version and try again
	I0816 17:25:08.374007       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="29.442371ms"
	I0816 17:25:08.393173       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-mqfxs EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-mqfxs\": the object has been modified; please apply your changes to the latest version and try again"
	I0816 17:25:08.393688       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"fd9fe61f-ffc6-4f61-848a-91dfce599e44", APIVersion:"v1", ResourceVersion:"301", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-mqfxs EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-mqfxs": the object has been modified; please apply your changes to the latest version and try again
	I0816 17:25:08.408690       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="34.610733ms"
	I0816 17:25:08.408944       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="211.164µs"
	I0816 17:29:26.116788       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000"
	I0816 17:29:28.983013       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m02"
	I0816 17:30:02.644115       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7dff88458-99xmp"
	I0816 17:30:02.656658       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="30.99µs"
	I0816 17:30:02.708557       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="47.170668ms"
	I0816 17:30:02.730962       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="22.359755ms"
	E0816 17:30:02.731009       1 replica_set.go:560] "Unhandled Error" err="sync \"default/busybox-7dff88458\" failed with Operation cannot be fulfilled on replicasets.apps \"busybox-7dff88458\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0816 17:30:02.732470       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="44.954µs"
	I0816 17:30:02.738153       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="43.511µs"
	
	
	==> kube-proxy [60feb425249e] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 17:24:29.419881       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 17:24:29.442807       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0816 17:24:29.442895       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 17:24:29.500213       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 17:24:29.500259       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 17:24:29.500279       1 server_linux.go:169] "Using iptables Proxier"
	I0816 17:24:29.504235       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 17:24:29.504982       1 server.go:483] "Version info" version="v1.31.0"
	I0816 17:24:29.505010       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 17:24:29.508282       1 config.go:197] "Starting service config controller"
	I0816 17:24:29.508363       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 17:24:29.508991       1 config.go:326] "Starting node config controller"
	I0816 17:24:29.509044       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 17:24:29.510479       1 config.go:104] "Starting endpoint slice config controller"
	I0816 17:24:29.510508       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 17:24:29.609193       1 shared_informer.go:320] Caches are synced for node config
	I0816 17:24:29.609332       1 shared_informer.go:320] Caches are synced for service config
	I0816 17:24:29.610541       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [81f6c96d4649] <==
	E0816 17:18:57.696982       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2592\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:00.770881       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2600": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:00.770973       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2600\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:00.771455       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2678": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:00.771540       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2678\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:03.838026       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2592": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:03.838287       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2592\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:09.980567       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2600": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:09.980625       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2600\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:13.053000       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2592": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:13.053145       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2592\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:16.125305       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2678": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:16.125738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2678\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:28.413017       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2600": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:28.413242       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2600\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:37.633251       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2592": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:37.633353       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2592\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:37.633417       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2678": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:37.633437       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2678\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:56.059814       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2600": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:56.059845       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2600\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:20:17.564736       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2592": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:20:17.564831       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2592\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:20:17.565065       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2678": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:20:17.565112       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2678\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [bcd696090d54] <==
	I0816 17:23:57.780845       1 serving.go:386] Generated self-signed cert in-memory
	W0816 17:24:08.860542       1 authentication.go:370] Error looking up in-cluster authentication configuration: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0816 17:24:08.860585       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0816 17:24:08.860591       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0816 17:24:18.591414       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0816 17:24:18.591456       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 17:24:18.606860       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0816 17:24:18.608591       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0816 17:24:18.608692       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 17:24:18.609554       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0816 17:24:18.708922       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [f7b2e9efdd94] <==
	W0816 17:20:20.013337       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 17:20:20.013508       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0816 17:20:22.503962       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 17:20:22.504039       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:20:23.117539       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 17:20:23.117759       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0816 17:20:24.619908       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 17:20:24.620160       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:20:32.932878       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 17:20:32.932925       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:20:34.100467       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 17:20:34.100511       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:20:36.209664       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 17:20:36.209784       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:20:36.615553       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 17:20:36.615615       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:20:37.131529       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 17:20:37.131621       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:20:37.319247       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 17:20:37.319312       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:20:39.232294       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 17:20:39.232326       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0816 17:21:33.466903       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?resourceVersion=2660": dial tcp 192.169.0.5:8443: connect: connection refused
	E0816 17:21:33.467202       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?resourceVersion=2660\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	E0816 17:23:23.612582       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 16 17:25:50 ha-286000 kubelet[1600]: E0816 17:25:50.045814    1600 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 17:25:50 ha-286000 kubelet[1600]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 17:25:50 ha-286000 kubelet[1600]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 17:25:50 ha-286000 kubelet[1600]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:25:50 ha-286000 kubelet[1600]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 17:26:50 ha-286000 kubelet[1600]: E0816 17:26:50.046088    1600 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 17:26:50 ha-286000 kubelet[1600]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 17:26:50 ha-286000 kubelet[1600]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 17:26:50 ha-286000 kubelet[1600]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:26:50 ha-286000 kubelet[1600]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 17:27:50 ha-286000 kubelet[1600]: E0816 17:27:50.045671    1600 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 17:27:50 ha-286000 kubelet[1600]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 17:27:50 ha-286000 kubelet[1600]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 17:27:50 ha-286000 kubelet[1600]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:27:50 ha-286000 kubelet[1600]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 17:28:50 ha-286000 kubelet[1600]: E0816 17:28:50.046455    1600 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 17:28:50 ha-286000 kubelet[1600]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 17:28:50 ha-286000 kubelet[1600]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 17:28:50 ha-286000 kubelet[1600]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:28:50 ha-286000 kubelet[1600]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 17:29:50 ha-286000 kubelet[1600]: E0816 17:29:50.046582    1600 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 17:29:50 ha-286000 kubelet[1600]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 17:29:50 ha-286000 kubelet[1600]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 17:29:50 ha-286000 kubelet[1600]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:29:50 ha-286000 kubelet[1600]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
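
The repeated "Operation cannot be fulfilled ... the object has been modified; please apply your changes to the latest version and try again" messages in the controller-manager log above are ordinary optimistic-concurrency conflicts: two controllers raced on the same object, and the loser must re-read the latest resourceVersion and retry, which is what the endpointslice controller's "retrying" lines show it doing. A minimal client-go sketch of that read-modify-retry pattern (illustrative only, not minikube or controller code; the kubeconfig path and the annotation being mutated are assumptions):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	// Build a client from the default kubeconfig (hypothetical setup).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Re-read and re-apply the mutation whenever the apiserver answers with a
	// resourceVersion conflict (the "object has been modified" error above).
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		svc, err := cs.CoreV1().Services("kube-system").Get(context.TODO(), "kube-dns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if svc.Annotations == nil {
			svc.Annotations = map[string]string{}
		}
		svc.Annotations["example.invalid/note"] = "touched" // hypothetical mutation
		_, err = cs.CoreV1().Services("kube-system").Update(context.TODO(), svc, metav1.UpdateOptions{})
		return err
	})
	fmt.Println("update finished:", err)
}
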
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-286000 -n ha-286000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-286000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7dff88458-pcqtw
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-286000 describe pod busybox-7dff88458-pcqtw
helpers_test.go:282: (dbg) kubectl --context ha-286000 describe pod busybox-7dff88458-pcqtw:

                                                
                                                
-- stdout --
	Name:             busybox-7dff88458-pcqtw
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9ff7z (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-9ff7z:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  45s   default-scheduler  0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  45s   default-scheduler  0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (96.49s)
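
The describe output above shows why busybox-7dff88458-pcqtw never schedules: one node carries the untolerated node.kubernetes.io/unreachable taint and the other two are excluded by the busybox deployment's pod anti-affinity, so 0/3 nodes are available. The post-mortem helper finds such pods with the server-side field selector status.phase!=Running; a client-go sketch of the same query (the kubeconfig path is an assumption, and this is not the test-harness code, which shells out to kubectl):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Same server-side filter the helper uses:
	// kubectl get po -A --field-selector=status.phase!=Running
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}
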

                                                
                                    
TestMultiControlPlane/serial/StopCluster (64.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-amd64 -p ha-286000 stop -v=7 --alsologtostderr
E0816 10:31:32.713953    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/addons-725000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-286000 stop -v=7 --alsologtostderr: signal: killed (59.937496841s)

                                                
                                                
-- stdout --
	* Stopping node "ha-286000-m04"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 10:30:48.129768    6030 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:30:48.130061    6030 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:30:48.130067    6030 out.go:358] Setting ErrFile to fd 2...
	I0816 10:30:48.130070    6030 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:30:48.130249    6030 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1276/.minikube/bin
	I0816 10:30:48.130578    6030 out.go:352] Setting JSON to false
	I0816 10:30:48.130736    6030 mustload.go:65] Loading cluster: ha-286000
	I0816 10:30:48.131078    6030 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:30:48.131172    6030 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:30:48.131550    6030 mustload.go:65] Loading cluster: ha-286000
	I0816 10:30:48.131703    6030 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:30:48.131738    6030 stop.go:39] StopHost: ha-286000-m04
	I0816 10:30:48.132131    6030 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:30:48.132177    6030 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:30:48.140664    6030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52374
	I0816 10:30:48.141150    6030 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:30:48.141623    6030 main.go:141] libmachine: Using API Version  1
	I0816 10:30:48.141645    6030 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:30:48.141901    6030 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:30:48.179906    6030 out.go:177] * Stopping node "ha-286000-m04"  ...
	I0816 10:30:48.221812    6030 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0816 10:30:48.221843    6030 main.go:141] libmachine: (ha-286000-m04) Calling .DriverName
	I0816 10:30:48.222067    6030 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0816 10:30:48.222091    6030 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHHostname
	I0816 10:30:48.222198    6030 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHPort
	I0816 10:30:48.222304    6030 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHKeyPath
	I0816 10:30:48.222405    6030 main.go:141] libmachine: (ha-286000-m04) Calling .GetSSHUsername
	I0816 10:30:48.222513    6030 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m04/id_rsa Username:docker}

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-darwin-amd64 -p ha-286000 stop -v=7 --alsologtostderr": signal: killed
ha_test.go:537: (dbg) Run:  out/minikube-darwin-amd64 -p ha-286000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-286000 status -v=7 --alsologtostderr: context deadline exceeded (1.399µs)
ha_test.go:540: failed to run minikube status. args "out/minikube-darwin-amd64 -p ha-286000 status -v=7 --alsologtostderr" : context deadline exceeded
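
Note the two failure shapes above: the stop command is killed by the test binary's own deadline (signal: killed after roughly 60s), and the follow-up status call never gets anywhere because the shared context has already expired (context deadline exceeded after 1.399µs). A minimal Go sketch of how exec.CommandContext produces exactly that pair of results (the binary path and arguments mirror the failing invocation; the 1-minute budget is an assumption, not the harness's actual value):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()

	// Long-running child: killed when the context deadline expires.
	cmd := exec.CommandContext(ctx, "out/minikube-darwin-amd64", "-p", "ha-286000", "stop", "-v=7", "--alsologtostderr")
	if err := cmd.Run(); err != nil {
		fmt.Println("stop:", err) // typically "signal: killed" once the deadline kills the child
	}

	// A second command on the same, now-expired context fails immediately.
	status := exec.CommandContext(ctx, "out/minikube-darwin-amd64", "-p", "ha-286000", "status")
	if err := status.Run(); err != nil {
		fmt.Println("status:", err, "/", ctx.Err()) // ctx.Err() is context.DeadlineExceeded
	}
}
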
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-286000 -n ha-286000
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-286000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-286000 logs -n 25: (3.456688099s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT |                     |
	|         | busybox-7dff88458-99xmp --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT |                     |
	|         | busybox-7dff88458-99xmp --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT |                     |
	|         | busybox-7dff88458-99xmp -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92 -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- get pods -o          | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT |                     |
	|         | busybox-7dff88458-99xmp              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-dvmvk -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-286000 -- exec                 | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:16 PDT | 16 Aug 24 10:16 PDT |
	|         | busybox-7dff88458-k9m92 -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| node    | add -p ha-286000 -v=7                | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:17 PDT | 16 Aug 24 10:17 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-286000 node stop m02 -v=7         | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:17 PDT | 16 Aug 24 10:18 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-286000 node start m02 -v=7        | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:20 PDT | 16 Aug 24 10:22 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-286000 -v=7               | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:22 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| stop    | -p ha-286000 -v=7                    | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:22 PDT | 16 Aug 24 10:23 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| start   | -p ha-286000 --wait=true -v=7        | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:23 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-286000                    | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:29 PDT |                     |
	| node    | ha-286000 node delete m03 -v=7       | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:29 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| stop    | ha-286000 stop -v=7                  | ha-286000 | jenkins | v1.33.1 | 16 Aug 24 10:30 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 10:23:31
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 10:23:31.430615    4656 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:23:31.431053    4656 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:23:31.431060    4656 out.go:358] Setting ErrFile to fd 2...
	I0816 10:23:31.431065    4656 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:23:31.431301    4656 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1276/.minikube/bin
	I0816 10:23:31.432961    4656 out.go:352] Setting JSON to false
	I0816 10:23:31.457337    4656 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3181,"bootTime":1723825830,"procs":437,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0816 10:23:31.457435    4656 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:23:31.479716    4656 out.go:177] * [ha-286000] minikube v1.33.1 on Darwin 14.6.1
	I0816 10:23:31.522521    4656 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:23:31.522577    4656 notify.go:220] Checking for updates...
	I0816 10:23:31.567096    4656 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:23:31.588384    4656 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0816 10:23:31.609442    4656 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:23:31.630204    4656 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:23:31.651227    4656 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:23:31.673167    4656 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:23:31.673335    4656 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:23:31.674026    4656 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:23:31.674118    4656 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:23:31.683709    4656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52161
	I0816 10:23:31.684063    4656 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:23:31.684452    4656 main.go:141] libmachine: Using API Version  1
	I0816 10:23:31.684463    4656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:23:31.684744    4656 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:23:31.684873    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:23:31.714156    4656 out.go:177] * Using the hyperkit driver based on existing profile
	I0816 10:23:31.756393    4656 start.go:297] selected driver: hyperkit
	I0816 10:23:31.756421    4656 start.go:901] validating driver "hyperkit" against &{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:23:31.756672    4656 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:23:31.756879    4656 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:23:31.757097    4656 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19461-1276/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0816 10:23:31.766849    4656 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0816 10:23:31.772699    4656 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:23:31.772722    4656 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0816 10:23:31.776315    4656 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:23:31.776385    4656 cni.go:84] Creating CNI manager for ""
	I0816 10:23:31.776395    4656 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0816 10:23:31.776475    4656 start.go:340] cluster config:
	{Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:23:31.776573    4656 iso.go:125] acquiring lock: {Name:mkb430b43b5e7a8a683454482519837ed996c78d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:23:31.798308    4656 out.go:177] * Starting "ha-286000" primary control-plane node in "ha-286000" cluster
	I0816 10:23:31.820262    4656 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:23:31.820333    4656 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0816 10:23:31.820361    4656 cache.go:56] Caching tarball of preloaded images
	I0816 10:23:31.820552    4656 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 10:23:31.820569    4656 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:23:31.820757    4656 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:23:31.821672    4656 start.go:360] acquireMachinesLock for ha-286000: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:23:31.821789    4656 start.go:364] duration metric: took 93.411µs to acquireMachinesLock for "ha-286000"
	I0816 10:23:31.821826    4656 start.go:96] Skipping create...Using existing machine configuration
	I0816 10:23:31.821843    4656 fix.go:54] fixHost starting: 
	I0816 10:23:31.822296    4656 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:23:31.822326    4656 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:23:31.831598    4656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52163
	I0816 10:23:31.831979    4656 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:23:31.832360    4656 main.go:141] libmachine: Using API Version  1
	I0816 10:23:31.832373    4656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:23:31.832622    4656 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:23:31.832766    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:23:31.832876    4656 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:23:31.832983    4656 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:23:31.833087    4656 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 3771
	I0816 10:23:31.834009    4656 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid 3771 missing from process table
	I0816 10:23:31.834044    4656 fix.go:112] recreateIfNeeded on ha-286000: state=Stopped err=<nil>
	I0816 10:23:31.834061    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	W0816 10:23:31.834156    4656 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 10:23:31.892140    4656 out.go:177] * Restarting existing hyperkit VM for "ha-286000" ...
	I0816 10:23:31.931475    4656 main.go:141] libmachine: (ha-286000) Calling .Start
	I0816 10:23:31.931796    4656 main.go:141] libmachine: (ha-286000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/hyperkit.pid
	I0816 10:23:31.931814    4656 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:23:31.933360    4656 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid 3771 missing from process table
	I0816 10:23:31.933379    4656 main.go:141] libmachine: (ha-286000) DBG | pid 3771 is in state "Stopped"
	I0816 10:23:31.933400    4656 main.go:141] libmachine: (ha-286000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/hyperkit.pid...
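Worth noting for readers unfamiliar with the driver: before reusing an existing VM, it reads hyperkit.pid and treats a pid that is absent from the process table as stale, as the lines above show. A minimal, self-contained Go sketch of that liveness check; the helper names are hypothetical, not minikube's actual code:

package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
	"syscall"
)

// pidAlive reports whether a process with the given pid exists.
// On Unix, signal 0 performs the existence check without actually
// delivering a signal.
func pidAlive(pid int) bool {
	proc, err := os.FindProcess(pid) // always succeeds on Unix
	if err != nil {
		return false
	}
	return proc.Signal(syscall.Signal(0)) == nil
}

// checkStalePidFile reads a pid file and removes it if the process it
// names is gone, mirroring the "Removing stale pid file" step above.
func checkStalePidFile(path string) error {
	data, err := os.ReadFile(path)
	if os.IsNotExist(err) {
		return nil // no pid file, nothing to do
	}
	if err != nil {
		return err
	}
	pid, err := strconv.Atoi(strings.TrimSpace(string(data)))
	if err != nil {
		return fmt.Errorf("malformed pid file %s: %w", path, err)
	}
	if !pidAlive(pid) {
		fmt.Printf("pid %d missing from process table, removing %s\n", pid, path)
		return os.Remove(path)
	}
	return fmt.Errorf("hyperkit already running with pid %d", pid)
}

func main() {
	if err := checkStalePidFile("hyperkit.pid"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}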
	I0816 10:23:31.934010    4656 main.go:141] libmachine: (ha-286000) DBG | Using UUID ad96de67-e238-408c-89eb-d74e5b68d297
	I0816 10:23:32.043909    4656 main.go:141] libmachine: (ha-286000) DBG | Generated MAC 66:c8:48:4e:12:1b
	I0816 10:23:32.043928    4656 main.go:141] libmachine: (ha-286000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000
	I0816 10:23:32.044052    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ad96de67-e238-408c-89eb-d74e5b68d297", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a89c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:23:32.044084    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ad96de67-e238-408c-89eb-d74e5b68d297", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a89c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:23:32.044134    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "ad96de67-e238-408c-89eb-d74e5b68d297", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/ha-286000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"}
	I0816 10:23:32.044180    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U ad96de67-e238-408c-89eb-d74e5b68d297 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/ha-286000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"
	I0816 10:23:32.044192    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 10:23:32.045646    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 DEBUG: hyperkit: Pid is 4669
	I0816 10:23:32.046030    4656 main.go:141] libmachine: (ha-286000) DBG | Attempt 0
	I0816 10:23:32.046046    4656 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:23:32.046146    4656 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 4669
	I0816 10:23:32.048140    4656 main.go:141] libmachine: (ha-286000) DBG | Searching for 66:c8:48:4e:12:1b in /var/db/dhcpd_leases ...
	I0816 10:23:32.048193    4656 main.go:141] libmachine: (ha-286000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0816 10:23:32.048231    4656 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dbd7}
	I0816 10:23:32.048249    4656 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:23:32.048272    4656 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66c0d7f9}
	I0816 10:23:32.048286    4656 main.go:141] libmachine: (ha-286000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0d786}
	I0816 10:23:32.048293    4656 main.go:141] libmachine: (ha-286000) DBG | Found match: 66:c8:48:4e:12:1b
	I0816 10:23:32.048301    4656 main.go:141] libmachine: (ha-286000) DBG | IP: 192.169.0.5
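The hyperkit driver has no guest agent, so it resolves the VM's IP by matching the generated MAC against macOS's DHCP lease database, as logged above. A simplified Go sketch of that lookup; the exact /var/db/dhcpd_leases field layout assumed here is inferred from the entries printed in the log, not verified against the file format:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// leaseEntry mirrors the fields minikube logs for each dhcp entry.
type leaseEntry struct {
	Name, IP, HWAddress string
}

// parseLeases reads /var/db/dhcpd_leases-style blocks of the form
// { name=... ip_address=... hw_address=1,aa:bb:... }, one field per line.
func parseLeases(path string) ([]leaseEntry, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	var entries []leaseEntry
	var cur leaseEntry
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "{":
			cur = leaseEntry{}
		case line == "}":
			entries = append(entries, cur)
		case strings.HasPrefix(line, "name="):
			cur.Name = strings.TrimPrefix(line, "name=")
		case strings.HasPrefix(line, "ip_address="):
			cur.IP = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// stored as "1,66:c8:48:4e:12:1b"; drop the leading type tag
			hw := strings.TrimPrefix(line, "hw_address=")
			if i := strings.IndexByte(hw, ','); i >= 0 {
				hw = hw[i+1:]
			}
			cur.HWAddress = hw
		}
	}
	return entries, sc.Err()
}

func main() {
	entries, err := parseLeases("/var/db/dhcpd_leases")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	const mac = "66:c8:48:4e:12:1b"
	for _, e := range entries {
		if e.HWAddress == mac {
			fmt.Println("IP:", e.IP) // matches the lookup logged above
		}
	}
}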
	I0816 10:23:32.048382    4656 main.go:141] libmachine: (ha-286000) Calling .GetConfigRaw
	I0816 10:23:32.049597    4656 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:23:32.049816    4656 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:23:32.050246    4656 machine.go:93] provisionDockerMachine start ...
	I0816 10:23:32.050258    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:23:32.050395    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:32.050512    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:32.050602    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:32.050694    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:32.050788    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:32.050933    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:23:32.051148    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:23:32.051157    4656 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 10:23:32.053822    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 10:23:32.105618    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 10:23:32.106644    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:23:32.106664    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:23:32.106672    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:23:32.106681    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:23:32.488273    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 10:23:32.488286    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 10:23:32.602925    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:23:32.602945    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:23:32.602968    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:23:32.603003    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:23:32.603842    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 10:23:32.603853    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:32 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 10:23:38.196809    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:38 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 10:23:38.196887    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:38 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 10:23:38.196898    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:38 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 10:23:38.223115    4656 main.go:141] libmachine: (ha-286000) DBG | 2024/08/16 10:23:38 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 10:23:43.125906    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 10:23:43.125920    4656 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:23:43.126080    4656 buildroot.go:166] provisioning hostname "ha-286000"
	I0816 10:23:43.126090    4656 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:23:43.126193    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:43.126289    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:43.126427    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:43.126532    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:43.126633    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:43.126763    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:23:43.126897    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:23:43.126905    4656 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-286000 && echo "ha-286000" | sudo tee /etc/hostname
	I0816 10:23:43.200672    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-286000
	
	I0816 10:23:43.200691    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:43.200824    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:43.200934    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:43.201035    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:43.201146    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:43.201266    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:23:43.201423    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:23:43.201434    4656 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-286000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-286000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-286000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 10:23:43.272382    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 10:23:43.272403    4656 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19461-1276/.minikube CaCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19461-1276/.minikube}
	I0816 10:23:43.272418    4656 buildroot.go:174] setting up certificates
	I0816 10:23:43.272432    4656 provision.go:84] configureAuth start
	I0816 10:23:43.272440    4656 main.go:141] libmachine: (ha-286000) Calling .GetMachineName
	I0816 10:23:43.272576    4656 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:23:43.272680    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:43.272769    4656 provision.go:143] copyHostCerts
	I0816 10:23:43.272801    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:23:43.272890    4656 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem, removing ...
	I0816 10:23:43.272898    4656 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:23:43.273149    4656 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem (1078 bytes)
	I0816 10:23:43.273406    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:23:43.273447    4656 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem, removing ...
	I0816 10:23:43.273452    4656 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:23:43.273542    4656 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem (1123 bytes)
	I0816 10:23:43.273700    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:23:43.273746    4656 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem, removing ...
	I0816 10:23:43.273751    4656 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:23:43.273833    4656 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem (1679 bytes)
	I0816 10:23:43.274002    4656 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem org=jenkins.ha-286000 san=[127.0.0.1 192.169.0.5 ha-286000 localhost minikube]
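The server certificate is regenerated with the SAN list shown above: the HA virtual IP territory plus the node IP, hostname, localhost, and minikube. A rough Go sketch of issuing such a cert with crypto/x509; the self-signed CA below stands in for the ca.pem/ca-key.pem pair that minikube actually loads, and error handling is elided for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA; the real flow loads an existing CA cert and key.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration above
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs logged above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-286000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-286000", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.5")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}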
	I0816 10:23:43.350973    4656 provision.go:177] copyRemoteCerts
	I0816 10:23:43.351030    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 10:23:43.351047    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:43.351198    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:43.351290    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:43.351418    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:43.351516    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:23:43.390290    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 10:23:43.390367    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 10:23:43.409250    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 10:23:43.409310    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0816 10:23:43.428428    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 10:23:43.428486    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 10:23:43.447295    4656 provision.go:87] duration metric: took 174.931658ms to configureAuth
	I0816 10:23:43.447308    4656 buildroot.go:189] setting minikube options for container-runtime
	I0816 10:23:43.447492    4656 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:23:43.447506    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:23:43.447636    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:43.447734    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:43.447819    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:43.447898    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:43.447976    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:43.448093    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:23:43.448217    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:23:43.448225    4656 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 10:23:43.510056    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 10:23:43.510072    4656 buildroot.go:70] root file system type: tmpfs
	I0816 10:23:43.510138    4656 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 10:23:43.510152    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:43.510280    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:43.510367    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:43.510466    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:43.510546    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:43.510704    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:23:43.510847    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:23:43.510894    4656 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 10:23:43.585463    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 10:23:43.585485    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:43.585612    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:43.585708    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:43.585797    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:43.585881    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:43.585994    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:23:43.586142    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:23:43.586155    4656 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 10:23:45.281245    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0816 10:23:45.281272    4656 machine.go:96] duration metric: took 13.233954511s to provisionDockerMachine
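The diff-then-swap step above is a common idempotent-update pattern: write the candidate unit to docker.service.new and only replace the installed unit (followed by daemon-reload and a docker restart in the real flow) when the contents differ. A minimal Go sketch of the same idea; the function name is hypothetical:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// installIfChanged writes content to <path>.new and swaps it into place
// only when it differs from what is already installed, so an unchanged
// unit never triggers a service restart.
func installIfChanged(path string, content []byte) (changed bool, err error) {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, content) {
		return false, nil // identical; leave the running service alone
	}
	tmp := path + ".new"
	if err := os.WriteFile(tmp, content, 0644); err != nil {
		return false, err
	}
	return true, os.Rename(tmp, path)
}

func main() {
	changed, err := installIfChanged("docker.service", []byte("[Unit]\n"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("changed:", changed)
}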
	I0816 10:23:45.281282    4656 start.go:293] postStartSetup for "ha-286000" (driver="hyperkit")
	I0816 10:23:45.281290    4656 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 10:23:45.281301    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:23:45.281477    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 10:23:45.281497    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:45.281579    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:45.281672    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:45.281756    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:45.281830    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:23:45.322349    4656 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 10:23:45.325873    4656 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 10:23:45.325888    4656 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/addons for local assets ...
	I0816 10:23:45.326003    4656 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/files for local assets ...
	I0816 10:23:45.326184    4656 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> 18312.pem in /etc/ssl/certs
	I0816 10:23:45.326190    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /etc/ssl/certs/18312.pem
	I0816 10:23:45.326400    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 10:23:45.335377    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:23:45.364973    4656 start.go:296] duration metric: took 83.714414ms for postStartSetup
	I0816 10:23:45.365002    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:23:45.365179    4656 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0816 10:23:45.365192    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:45.365284    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:45.365363    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:45.365463    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:45.365567    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:23:45.403540    4656 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0816 10:23:45.403604    4656 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0816 10:23:45.456725    4656 fix.go:56] duration metric: took 13.637911557s for fixHost
	I0816 10:23:45.456746    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:45.456881    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:45.456970    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:45.457077    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:45.457170    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:45.457308    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:23:45.457449    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 10:23:45.457456    4656 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 10:23:45.520497    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723829025.657632114
	
	I0816 10:23:45.520510    4656 fix.go:216] guest clock: 1723829025.657632114
	I0816 10:23:45.520516    4656 fix.go:229] Guest: 2024-08-16 10:23:45.657632114 -0700 PDT Remote: 2024-08-16 10:23:45.456737 -0700 PDT m=+14.070866227 (delta=200.895114ms)
	I0816 10:23:45.520533    4656 fix.go:200] guest clock delta is within tolerance: 200.895114ms
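The guest clock check above runs `date +%s.%N` over SSH and compares the result with the host clock; in this run the skew was about 200ms, inside tolerance, so no resync was needed. A small Go sketch of the comparison, with parsing logic approximated rather than taken from minikube's code:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts `date +%s.%N` output (e.g. "1723829025.657632114")
// into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1723829025.657632114") // value logged above
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	// In the run above the guest/host skew was ~200ms, which minikube
	// treats as within tolerance; large skews trigger a clock resync.
	fmt.Printf("guest clock delta: %v\n", delta)
}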
	I0816 10:23:45.520536    4656 start.go:83] releasing machines lock for "ha-286000", held for 13.701786252s
	I0816 10:23:45.520558    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:23:45.520685    4656 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:23:45.520780    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:23:45.521071    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:23:45.521183    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:23:45.521258    4656 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 10:23:45.521295    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:45.521314    4656 ssh_runner.go:195] Run: cat /version.json
	I0816 10:23:45.521325    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:23:45.521385    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:45.521413    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:23:45.521478    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:45.521492    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:23:45.521569    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:45.521588    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:23:45.521684    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:23:45.521698    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:23:45.608738    4656 ssh_runner.go:195] Run: systemctl --version
	I0816 10:23:45.613819    4656 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 10:23:45.618009    4656 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 10:23:45.618054    4656 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 10:23:45.630928    4656 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 10:23:45.630940    4656 start.go:495] detecting cgroup driver to use...
	I0816 10:23:45.631050    4656 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:23:45.647297    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 10:23:45.656185    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 10:23:45.664870    4656 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 10:23:45.664909    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 10:23:45.673735    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:23:45.682541    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 10:23:45.691093    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:23:45.699692    4656 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 10:23:45.708389    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 10:23:45.717214    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 10:23:45.726031    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 10:23:45.734772    4656 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 10:23:45.742525    4656 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 10:23:45.750474    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:23:45.857037    4656 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0816 10:23:45.876038    4656 start.go:495] detecting cgroup driver to use...
	I0816 10:23:45.876115    4656 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 10:23:45.891371    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:23:45.904769    4656 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 10:23:45.925222    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:23:45.935653    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:23:45.946111    4656 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 10:23:45.966114    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:23:45.976753    4656 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:23:45.991951    4656 ssh_runner.go:195] Run: which cri-dockerd
	I0816 10:23:45.995087    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 10:23:46.002262    4656 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 10:23:46.015662    4656 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 10:23:46.113010    4656 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 10:23:46.220102    4656 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 10:23:46.220181    4656 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 10:23:46.234448    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:23:46.327392    4656 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:23:48.670555    4656 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.343962753s)
	I0816 10:23:48.670612    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0816 10:23:48.681270    4656 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0816 10:23:48.694180    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:23:48.704525    4656 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0816 10:23:48.796386    4656 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0816 10:23:48.896301    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:23:49.015732    4656 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0816 10:23:49.029308    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:23:49.039437    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:23:49.133284    4656 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0816 10:23:49.196413    4656 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0816 10:23:49.196492    4656 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
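The 60-second wait above is a simple poll for the cri-dockerd unix socket to appear after the service restarts. A minimal Go equivalent; the helper name is hypothetical:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the unix socket appears, like the
// "Will wait 60s for socket path" step above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}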
	I0816 10:23:49.200987    4656 start.go:563] Will wait 60s for crictl version
	I0816 10:23:49.201034    4656 ssh_runner.go:195] Run: which crictl
	I0816 10:23:49.204272    4656 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 10:23:49.229772    4656 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0816 10:23:49.229851    4656 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:23:49.247799    4656 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:23:49.310834    4656 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0816 10:23:49.310884    4656 main.go:141] libmachine: (ha-286000) Calling .GetIP
	I0816 10:23:49.311324    4656 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0816 10:23:49.315940    4656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:23:49.325830    4656 kubeadm.go:883] updating cluster {Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 10:23:49.325921    4656 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:23:49.325979    4656 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 10:23:49.344604    4656 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0816 10:23:49.344616    4656 docker.go:615] Images already preloaded, skipping extraction
	I0816 10:23:49.344689    4656 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 10:23:49.358019    4656 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0816 10:23:49.358039    4656 cache_images.go:84] Images are preloaded, skipping loading
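"Images are preloaded, skipping loading" above means the runtime already reports every image the preload tarball would provide, so extraction is skipped. A rough Go sketch of that comparison; the required-image list below is deliberately truncated to a few of the images printed above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List the images the runtime already has, as in the logged command.
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		fmt.Println("docker not reachable:", err)
		return
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.31.0",
		"registry.k8s.io/etcd:3.5.15-0",
		"registry.k8s.io/pause:3.10",
	}
	for _, img := range required {
		if !have[img] {
			fmt.Println("missing, would extract from preload:", img)
		}
	}
}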
	I0816 10:23:49.358049    4656 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.0 docker true true} ...
	I0816 10:23:49.358133    4656 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-286000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 10:23:49.358200    4656 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0816 10:23:49.396733    4656 cni.go:84] Creating CNI manager for ""
	I0816 10:23:49.396746    4656 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0816 10:23:49.396758    4656 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 10:23:49.396773    4656 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-286000 NodeName:ha-286000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 10:23:49.396858    4656 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-286000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 10:23:49.396876    4656 kube-vip.go:115] generating kube-vip config ...
	I0816 10:23:49.396930    4656 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 10:23:49.409760    4656 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 10:23:49.409827    4656 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
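For context: the manifest above runs kube-vip as a static pod on each control-plane node. The instances compete for the plndr-cp-lock lease (vip_leaderelection), and the current leader answers ARP for the virtual IP 192.169.0.254 and, with lb_enable set, load-balances apiserver traffic on port 8443 across the control planes. A trivial Go sketch that probes that VIP; the /healthz path and the skipped TLS verification are illustrative assumptions, not part of the manifest:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Probe the HA virtual IP the manifest above advertises.
	// InsecureSkipVerify because this is only a reachability sketch;
	// real clients verify the apiserver certificate.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.169.0.254:8443/healthz")
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("VIP healthz:", resp.Status)
}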
	I0816 10:23:49.409880    4656 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 10:23:49.417741    4656 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 10:23:49.417784    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0816 10:23:49.425178    4656 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0816 10:23:49.438709    4656 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 10:23:49.451834    4656 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0816 10:23:49.465615    4656 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0816 10:23:49.478992    4656 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0816 10:23:49.481872    4656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:23:49.491581    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:23:49.591270    4656 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:23:49.605166    4656 certs.go:68] Setting up /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000 for IP: 192.169.0.5
	I0816 10:23:49.605178    4656 certs.go:194] generating shared ca certs ...
	I0816 10:23:49.605204    4656 certs.go:226] acquiring lock for ca certs: {Name:mka549fbc467849834d2267da204028c49d88a3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:23:49.605373    4656 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key
	I0816 10:23:49.605447    4656 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key
	I0816 10:23:49.605458    4656 certs.go:256] generating profile certs ...
	I0816 10:23:49.605548    4656 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key
	I0816 10:23:49.605569    4656 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.1abcce66
	I0816 10:23:49.605590    4656 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.1abcce66 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I0816 10:23:49.872724    4656 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.1abcce66 ...
	I0816 10:23:49.872746    4656 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.1abcce66: {Name:mk52a3c288948ed76c5e0c3d52d6b4bf6d85dac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:23:49.873234    4656 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.1abcce66 ...
	I0816 10:23:49.873246    4656 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.1abcce66: {Name:mk4d6d8f8e53e86a8e5b1aff2a47e28c9af375aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:23:49.873462    4656 certs.go:381] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.1abcce66 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt
	I0816 10:23:49.873670    4656 certs.go:385] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.1abcce66 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key
	I0816 10:23:49.873917    4656 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key
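The apiserver certificate generated above carries the service IP, loopback, the three control-plane node IPs, and the HA virtual IP as IP SANs, so one certificate stays valid no matter which endpoint a client dials. A rough, self-signed sketch of issuing such a cert with Go's crypto/x509 — key type and validity are assumptions for illustration, not minikube's actual crypto.go logic (minikube signs with its CA rather than self-signing):

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// The IP SANs from the log: service IP, loopback, node IPs, and the VIP.
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("192.169.0.5"), net.ParseIP("192.169.0.6"),
				net.ParseIP("192.169.0.7"), net.ParseIP("192.169.0.254"),
			},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}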
	I0816 10:23:49.873927    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 10:23:49.873950    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 10:23:49.873969    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 10:23:49.873988    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 10:23:49.874005    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 10:23:49.874022    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 10:23:49.874039    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 10:23:49.874056    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 10:23:49.874155    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem (1338 bytes)
	W0816 10:23:49.874204    4656 certs.go:480] ignoring /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831_empty.pem, impossibly tiny 0 bytes
	I0816 10:23:49.874213    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 10:23:49.874243    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem (1078 bytes)
	I0816 10:23:49.874272    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem (1123 bytes)
	I0816 10:23:49.874303    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem (1679 bytes)
	I0816 10:23:49.874365    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:23:49.874404    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem -> /usr/share/ca-certificates/1831.pem
	I0816 10:23:49.874426    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /usr/share/ca-certificates/18312.pem
	I0816 10:23:49.874445    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:23:49.874951    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 10:23:49.894591    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 10:23:49.949362    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 10:23:50.001129    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 10:23:50.031447    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0816 10:23:50.051861    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 10:23:50.072126    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 10:23:50.092020    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 10:23:50.111735    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem --> /usr/share/ca-certificates/1831.pem (1338 bytes)
	I0816 10:23:50.131448    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /usr/share/ca-certificates/18312.pem (1708 bytes)
	I0816 10:23:50.150204    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 10:23:50.170431    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 10:23:50.183792    4656 ssh_runner.go:195] Run: openssl version
	I0816 10:23:50.188069    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18312.pem && ln -fs /usr/share/ca-certificates/18312.pem /etc/ssl/certs/18312.pem"
	I0816 10:23:50.196462    4656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18312.pem
	I0816 10:23:50.199930    4656 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 16:57 /usr/share/ca-certificates/18312.pem
	I0816 10:23:50.199966    4656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18312.pem
	I0816 10:23:50.204340    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18312.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 10:23:50.212595    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 10:23:50.220934    4656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:23:50.224472    4656 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:48 /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:23:50.224507    4656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:23:50.228762    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 10:23:50.237224    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1831.pem && ln -fs /usr/share/ca-certificates/1831.pem /etc/ssl/certs/1831.pem"
	I0816 10:23:50.245558    4656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1831.pem
	I0816 10:23:50.249052    4656 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 16:57 /usr/share/ca-certificates/1831.pem
	I0816 10:23:50.249090    4656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1831.pem
	I0816 10:23:50.253505    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1831.pem /etc/ssl/certs/51391683.0"
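Each `ln -fs ... /etc/ssl/certs/<hash>.0` step above follows OpenSSL's c_rehash convention: the link name is the certificate's subject hash (e.g. b5213941 for minikubeCA.pem), which lets OpenSSL locate a CA by hash during verification. A hypothetical Go sketch of the same step, shelling out to openssl for the hash, with paths taken from the log:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		const cert = "/usr/share/ca-certificates/minikubeCA.pem"
		// Equivalent of `openssl x509 -hash -noout -in <cert>`.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
		if _, err := os.Lstat(link); err == nil {
			return // symlink already in place, as the `test -L` guard checks
		}
		if err := os.Symlink(cert, link); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}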
	I0816 10:23:50.261784    4656 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 10:23:50.265339    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 10:23:50.269761    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 10:23:50.273967    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 10:23:50.278404    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 10:23:50.282734    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 10:23:50.286959    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
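The `openssl x509 ... -checkend 86400` runs above ask whether each certificate expires within the next 24 hours (86400 seconds), which is what decides whether certs need regeneration on restart. An equivalent check in Go, as a sketch (the path is one of those from the log):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in a PEM file expires
	// inside the given window, mirroring `openssl x509 -checkend`.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}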
	I0816 10:23:50.291328    4656 kubeadm.go:392] StartCluster: {Name:ha-286000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:23:50.291439    4656 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0816 10:23:50.308917    4656 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 10:23:50.316477    4656 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 10:23:50.316487    4656 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 10:23:50.316521    4656 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 10:23:50.324768    4656 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:23:50.325077    4656 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-286000" does not appear in /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:23:50.325160    4656 kubeconfig.go:62] /Users/jenkins/minikube-integration/19461-1276/kubeconfig needs updating (will repair): [kubeconfig missing "ha-286000" cluster setting kubeconfig missing "ha-286000" context setting]
	I0816 10:23:50.325346    4656 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/kubeconfig: {Name:mk7f4491c8ec5cf96c96a5a1a5dbfba4301394a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:23:50.325844    4656 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:23:50.326042    4656 kapi.go:59] client config for ha-286000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key", CAFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x5540f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 10:23:50.326340    4656 cert_rotation.go:140] Starting client certificate rotation controller
	I0816 10:23:50.326539    4656 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 10:23:50.333744    4656 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0816 10:23:50.333758    4656 kubeadm.go:597] duration metric: took 17.27164ms to restartPrimaryControlPlane
	I0816 10:23:50.333763    4656 kubeadm.go:394] duration metric: took 42.452811ms to StartCluster
	I0816 10:23:50.333775    4656 settings.go:142] acquiring lock: {Name:mk60e377244eac756871b74b6bd45115654652a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:23:50.333847    4656 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:23:50.334196    4656 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/kubeconfig: {Name:mk7f4491c8ec5cf96c96a5a1a5dbfba4301394a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:23:50.334417    4656 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:23:50.334430    4656 start.go:241] waiting for startup goroutines ...
	I0816 10:23:50.334436    4656 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 10:23:50.334546    4656 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:23:50.378007    4656 out.go:177] * Enabled addons: 
	I0816 10:23:50.399051    4656 addons.go:510] duration metric: took 64.628768ms for enable addons: enabled=[]
	I0816 10:23:50.399122    4656 start.go:246] waiting for cluster config update ...
	I0816 10:23:50.399134    4656 start.go:255] writing updated cluster config ...
	I0816 10:23:50.421150    4656 out.go:201] 
	I0816 10:23:50.443594    4656 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:23:50.443722    4656 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:23:50.466091    4656 out.go:177] * Starting "ha-286000-m02" control-plane node in "ha-286000" cluster
	I0816 10:23:50.507896    4656 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:23:50.507978    4656 cache.go:56] Caching tarball of preloaded images
	I0816 10:23:50.508166    4656 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 10:23:50.508183    4656 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:23:50.508305    4656 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:23:50.509238    4656 start.go:360] acquireMachinesLock for ha-286000-m02: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:23:50.509340    4656 start.go:364] duration metric: took 77.349µs to acquireMachinesLock for "ha-286000-m02"
	I0816 10:23:50.509364    4656 start.go:96] Skipping create...Using existing machine configuration
	I0816 10:23:50.509373    4656 fix.go:54] fixHost starting: m02
	I0816 10:23:50.509785    4656 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:23:50.509813    4656 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:23:50.519278    4656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52185
	I0816 10:23:50.519808    4656 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:23:50.520224    4656 main.go:141] libmachine: Using API Version  1
	I0816 10:23:50.520241    4656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:23:50.520527    4656 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:23:50.520742    4656 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:23:50.520847    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetState
	I0816 10:23:50.520930    4656 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:23:50.521027    4656 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 4408
	I0816 10:23:50.521973    4656 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid 4408 missing from process table
	I0816 10:23:50.522001    4656 fix.go:112] recreateIfNeeded on ha-286000-m02: state=Stopped err=<nil>
	I0816 10:23:50.522008    4656 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	W0816 10:23:50.522113    4656 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 10:23:50.564905    4656 out.go:177] * Restarting existing hyperkit VM for "ha-286000-m02" ...
	I0816 10:23:50.585936    4656 main.go:141] libmachine: (ha-286000-m02) Calling .Start
	I0816 10:23:50.586207    4656 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:23:50.586317    4656 main.go:141] libmachine: (ha-286000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/hyperkit.pid
	I0816 10:23:50.588008    4656 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid 4408 missing from process table
	I0816 10:23:50.588025    4656 main.go:141] libmachine: (ha-286000-m02) DBG | pid 4408 is in state "Stopped"
	I0816 10:23:50.588043    4656 main.go:141] libmachine: (ha-286000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/hyperkit.pid...
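The sequence above — the hyperkit pid file still exists, but pid 4408 is missing from the process table — is a stale-pidfile check after an unclean shutdown: signal 0 probes whether a process exists without delivering anything to it. A minimal Go sketch of that probe, using the pid-file path from the log:

	package main

	import (
		"fmt"
		"os"
		"strconv"
		"strings"
		"syscall"
	)

	func main() {
		const pidFile = "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/hyperkit.pid"
		raw, err := os.ReadFile(pidFile)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		pid, err := strconv.Atoi(strings.TrimSpace(string(raw)))
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		// Signal 0 performs the permission/existence checks but sends nothing.
		if err := syscall.Kill(pid, 0); err != nil {
			fmt.Printf("pid %d missing from process table; removing stale pid file\n", pid)
			os.Remove(pidFile)
		} else {
			fmt.Printf("pid %d is still running\n", pid)
		}
	}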
	I0816 10:23:50.588412    4656 main.go:141] libmachine: (ha-286000-m02) DBG | Using UUID f7630301-2bc3-4935-a751-ac72999da031
	I0816 10:23:50.615912    4656 main.go:141] libmachine: (ha-286000-m02) DBG | Generated MAC 72:69:8f:11:68:1d
	I0816 10:23:50.615934    4656 main.go:141] libmachine: (ha-286000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000
	I0816 10:23:50.616061    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f7630301-2bc3-4935-a751-ac72999da031", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aca20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:23:50.616091    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f7630301-2bc3-4935-a751-ac72999da031", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aca20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:23:50.616153    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "f7630301-2bc3-4935-a751-ac72999da031", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/ha-286000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"}
	I0816 10:23:50.616186    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U f7630301-2bc3-4935-a751-ac72999da031 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/ha-286000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"
	I0816 10:23:50.616197    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 10:23:50.617617    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 DEBUG: hyperkit: Pid is 4678
	I0816 10:23:50.618129    4656 main.go:141] libmachine: (ha-286000-m02) DBG | Attempt 0
	I0816 10:23:50.618145    4656 main.go:141] libmachine: (ha-286000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:23:50.618226    4656 main.go:141] libmachine: (ha-286000-m02) DBG | hyperkit pid from json: 4678
	I0816 10:23:50.620253    4656 main.go:141] libmachine: (ha-286000-m02) DBG | Searching for 72:69:8f:11:68:1d in /var/db/dhcpd_leases ...
	I0816 10:23:50.620318    4656 main.go:141] libmachine: (ha-286000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0816 10:23:50.620334    4656 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:23:50.620349    4656 main.go:141] libmachine: (ha-286000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dbd7}
	I0816 10:23:50.620388    4656 main.go:141] libmachine: (ha-286000-m02) DBG | Found match: 72:69:8f:11:68:1d
	I0816 10:23:50.620402    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetConfigRaw
	I0816 10:23:50.620404    4656 main.go:141] libmachine: (ha-286000-m02) DBG | IP: 192.169.0.6
	I0816 10:23:50.621061    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:23:50.621271    4656 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:23:50.621639    4656 machine.go:93] provisionDockerMachine start ...
	I0816 10:23:50.621648    4656 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:23:50.621787    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:23:50.621898    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:23:50.622018    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:23:50.622130    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:23:50.622215    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:23:50.622373    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:23:50.622508    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:23:50.622515    4656 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 10:23:50.625610    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 10:23:50.635240    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 10:23:50.636222    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:23:50.636239    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:23:50.636256    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:23:50.636268    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:23:51.016978    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:51 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 10:23:51.016996    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:51 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 10:23:51.131867    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:23:51.131882    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:23:51.131905    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:23:51.131915    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:23:51.132722    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:51 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 10:23:51.132732    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:51 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 10:23:56.691144    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:56 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 10:23:56.691211    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:56 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 10:23:56.691221    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:56 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 10:23:56.715157    4656 main.go:141] libmachine: (ha-286000-m02) DBG | 2024/08/16 10:23:56 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 10:24:01.691628    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 10:24:01.691659    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:24:01.691824    4656 buildroot.go:166] provisioning hostname "ha-286000-m02"
	I0816 10:24:01.691835    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:24:01.691933    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:24:01.692024    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:24:01.692118    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:01.692216    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:01.692322    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:24:01.692468    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:01.692634    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:24:01.692662    4656 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-286000-m02 && echo "ha-286000-m02" | sudo tee /etc/hostname
	I0816 10:24:01.771215    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-286000-m02
	
	I0816 10:24:01.771228    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:24:01.771358    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:24:01.771450    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:01.771545    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:01.771647    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:24:01.771778    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:01.771942    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:24:01.771954    4656 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-286000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-286000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-286000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 10:24:01.843105    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
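The shell block above makes the node's own hostname resolvable: if no ha-286000-m02 entry exists in /etc/hosts, it either rewrites an existing 127.0.1.1 line in place or appends one. A hypothetical Go version of that logic (writes a temp copy instead of /etc/hosts, and uses a plain substring check where the shell uses anchored grep):

	package main

	import (
		"fmt"
		"os"
		"regexp"
		"strings"
	)

	func main() {
		const hostname = "ha-286000-m02"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		text := string(data)
		re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		switch {
		case strings.Contains(text, hostname):
			// hostname already resolvable, nothing to do
		case re.MatchString(text):
			text = re.ReplaceAllString(text, "127.0.1.1 "+hostname)
		default:
			text += "\n127.0.1.1 " + hostname + "\n"
		}
		if err := os.WriteFile("/tmp/hosts.new", []byte(text), 0o644); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}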
	I0816 10:24:01.843122    4656 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19461-1276/.minikube CaCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19461-1276/.minikube}
	I0816 10:24:01.843132    4656 buildroot.go:174] setting up certificates
	I0816 10:24:01.843138    4656 provision.go:84] configureAuth start
	I0816 10:24:01.843144    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetMachineName
	I0816 10:24:01.843278    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:24:01.843379    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:24:01.843473    4656 provision.go:143] copyHostCerts
	I0816 10:24:01.843506    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:24:01.843559    4656 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem, removing ...
	I0816 10:24:01.843565    4656 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:24:01.843699    4656 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem (1078 bytes)
	I0816 10:24:01.843904    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:24:01.843934    4656 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem, removing ...
	I0816 10:24:01.843938    4656 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:24:01.844006    4656 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem (1123 bytes)
	I0816 10:24:01.844155    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:24:01.844183    4656 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem, removing ...
	I0816 10:24:01.844188    4656 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:24:01.844260    4656 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem (1679 bytes)
	I0816 10:24:01.844439    4656 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem org=jenkins.ha-286000-m02 san=[127.0.0.1 192.169.0.6 ha-286000-m02 localhost minikube]
	I0816 10:24:02.337393    4656 provision.go:177] copyRemoteCerts
	I0816 10:24:02.337441    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 10:24:02.337455    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:24:02.337604    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:24:02.337706    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:02.337804    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:24:02.337897    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:24:02.378639    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 10:24:02.378714    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 10:24:02.398417    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 10:24:02.398480    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 10:24:02.418213    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 10:24:02.418277    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 10:24:02.438096    4656 provision.go:87] duration metric: took 595.044673ms to configureAuth
	I0816 10:24:02.438110    4656 buildroot.go:189] setting minikube options for container-runtime
	I0816 10:24:02.438277    4656 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:24:02.438294    4656 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:24:02.438430    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:24:02.438542    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:24:02.438634    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:02.438711    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:02.438803    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:24:02.438923    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:02.439049    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:24:02.439057    4656 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 10:24:02.506619    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 10:24:02.506630    4656 buildroot.go:70] root file system type: tmpfs
	I0816 10:24:02.506699    4656 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 10:24:02.506717    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:24:02.506855    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:24:02.506952    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:02.507065    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:02.507163    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:24:02.507316    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:02.507497    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:24:02.507542    4656 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 10:24:02.585569    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 10:24:02.585592    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:24:02.585731    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:24:02.585811    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:02.585904    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:02.585995    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:24:02.586114    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:02.586256    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:24:02.586268    4656 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 10:24:04.282251    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0816 10:24:04.282267    4656 machine.go:96] duration metric: took 13.663433605s to provisionDockerMachine
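The `diff ... || { mv ...; systemctl ... }` step above is an idempotent update: the candidate unit is written to docker.service.new, and only if it differs from the installed unit is it moved into place and the daemon reloaded and restarted (here the diff fails because no unit existed yet, so the swap runs). A sketch of that compare-and-swap pattern in Go — it assumes root and simply shells out to the same commands as the log:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		cur := "/lib/systemd/system/docker.service"
		cand := cur + ".new"
		// `diff` exits 0 only when the files are identical.
		if err := exec.Command("diff", "-u", cur, cand).Run(); err == nil {
			fmt.Println("unit unchanged; skipping restart")
			return
		}
		for _, args := range [][]string{
			{"mv", cand, cur},
			{"systemctl", "daemon-reload"},
			{"systemctl", "enable", "docker"},
			{"systemctl", "restart", "docker"},
		} {
			if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
				fmt.Fprintf(os.Stderr, "%v failed: %v\n%s", args, err, out)
				return
			}
		}
	}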
	I0816 10:24:04.282274    4656 start.go:293] postStartSetup for "ha-286000-m02" (driver="hyperkit")
	I0816 10:24:04.282282    4656 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 10:24:04.282291    4656 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:24:04.282476    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 10:24:04.282490    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:24:04.282590    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:24:04.282676    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:04.282759    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:24:04.282862    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:24:04.323177    4656 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 10:24:04.326227    4656 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 10:24:04.326238    4656 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/addons for local assets ...
	I0816 10:24:04.326327    4656 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/files for local assets ...
	I0816 10:24:04.326475    4656 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> 18312.pem in /etc/ssl/certs
	I0816 10:24:04.326481    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /etc/ssl/certs/18312.pem
	I0816 10:24:04.326635    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 10:24:04.333923    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:24:04.354007    4656 start.go:296] duration metric: took 71.735624ms for postStartSetup
	I0816 10:24:04.354029    4656 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:24:04.354205    4656 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0816 10:24:04.354219    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:24:04.354303    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:24:04.354400    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:04.354484    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:24:04.354570    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:24:04.394664    4656 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0816 10:24:04.394719    4656 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0816 10:24:04.426272    4656 fix.go:56] duration metric: took 13.919762029s for fixHost
	I0816 10:24:04.426298    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:24:04.426444    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:24:04.426552    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:04.426653    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:04.426754    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:24:04.426882    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:04.427028    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 10:24:04.427036    4656 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 10:24:04.493811    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723829044.518955224
	
	I0816 10:24:04.493822    4656 fix.go:216] guest clock: 1723829044.518955224
	I0816 10:24:04.493832    4656 fix.go:229] Guest: 2024-08-16 10:24:04.518955224 -0700 PDT Remote: 2024-08-16 10:24:04.426286 -0700 PDT m=+33.045019463 (delta=92.669224ms)
	I0816 10:24:04.493843    4656 fix.go:200] guest clock delta is within tolerance: 92.669224ms
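The guest-clock check above parses the VM's `date +%s.%N` output as fractional seconds and compares it against the host clock; the 92.669224ms delta here is within tolerance, so no resync is needed. A rough Go sketch of that comparison — the 2s tolerance is an assumption for illustration, and float64 parsing keeps only about microsecond precision, which is fine at millisecond scale:

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	// clockDelta parses `date +%s.%N` output from the guest and returns how far
	// the guest clock lags or leads the host clock.
	func clockDelta(guestOut string) (time.Duration, error) {
		secs, err := strconv.ParseFloat(guestOut, 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return time.Since(guest), nil
	}

	func main() {
		// With a live reading the delta would be small, as in the log; this
		// constant is the stale value from the log, so the output is illustrative.
		delta, err := clockDelta("1723829044.518955224")
		if err != nil {
			panic(err)
		}
		if delta < 0 {
			delta = -delta
		}
		fmt.Printf("guest clock delta: %v (resync if above an assumed 2s tolerance)\n", delta)
	}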
	I0816 10:24:04.493847    4656 start.go:83] releasing machines lock for "ha-286000-m02", held for 13.987372778s
	I0816 10:24:04.493864    4656 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:24:04.494002    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:24:04.518312    4656 out.go:177] * Found network options:
	I0816 10:24:04.540563    4656 out.go:177]   - NO_PROXY=192.169.0.5
	W0816 10:24:04.562476    4656 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:24:04.562514    4656 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:24:04.563369    4656 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:24:04.563631    4656 main.go:141] libmachine: (ha-286000-m02) Calling .DriverName
	I0816 10:24:04.563760    4656 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 10:24:04.563821    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	W0816 10:24:04.563878    4656 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:24:04.563978    4656 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0816 10:24:04.563994    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:24:04.563998    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHHostname
	I0816 10:24:04.564194    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:04.564230    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHPort
	I0816 10:24:04.564370    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:24:04.564412    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHKeyPath
	I0816 10:24:04.564603    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetSSHUsername
	I0816 10:24:04.564677    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	I0816 10:24:04.564735    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m02/id_rsa Username:docker}
	W0816 10:24:04.601353    4656 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 10:24:04.601410    4656 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 10:24:04.653940    4656 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 10:24:04.653960    4656 start.go:495] detecting cgroup driver to use...
	I0816 10:24:04.654084    4656 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:24:04.669702    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 10:24:04.678676    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 10:24:04.687652    4656 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 10:24:04.687695    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 10:24:04.696611    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:24:04.705567    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 10:24:04.714412    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:24:04.723256    4656 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 10:24:04.732202    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 10:24:04.746674    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 10:24:04.757904    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 10:24:04.767905    4656 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 10:24:04.779013    4656 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 10:24:04.790474    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:24:04.892919    4656 ssh_runner.go:195] Run: sudo systemctl restart containerd
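The sed pipeline above flips SystemdCgroup to false in /etc/containerd/config.toml so containerd matches the requested cgroupfs driver, then reloads systemd and restarts the service. A minimal standalone sketch of those three steps, assuming direct root access on the node rather than minikube's ssh_runner:

package main

import (
	"log"
	"os/exec"
)

// run executes one shell command as root and aborts on failure.
func run(cmd string) {
	if out, err := exec.Command("sudo", "sh", "-c", cmd).CombinedOutput(); err != nil {
		log.Fatalf("%s: %v\n%s", cmd, err, out)
	}
}

func main() {
	// Same edit as the sed command logged above: force the cgroupfs driver.
	run(`sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`)
	run(`systemctl daemon-reload`)
	run(`systemctl restart containerd`)
}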
	I0816 10:24:04.911874    4656 start.go:495] detecting cgroup driver to use...
	I0816 10:24:04.911946    4656 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 10:24:04.929416    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:24:04.941191    4656 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 10:24:04.954835    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:24:04.965605    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:24:04.976040    4656 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 10:24:05.001090    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:24:05.011999    4656 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:24:05.026893    4656 ssh_runner.go:195] Run: which cri-dockerd
	I0816 10:24:05.029920    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 10:24:05.037094    4656 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 10:24:05.050742    4656 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 10:24:05.142175    4656 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 10:24:05.247816    4656 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 10:24:05.247843    4656 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 10:24:05.261875    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:24:05.354182    4656 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:24:07.691138    4656 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.337231062s)
	I0816 10:24:07.691198    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0816 10:24:07.701875    4656 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0816 10:24:07.715113    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:24:07.725351    4656 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0816 10:24:07.820462    4656 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0816 10:24:07.932462    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:24:08.044265    4656 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0816 10:24:08.057914    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:24:08.069171    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:24:08.165855    4656 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0816 10:24:08.229743    4656 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0816 10:24:08.229822    4656 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0816 10:24:08.234625    4656 start.go:563] Will wait 60s for crictl version
	I0816 10:24:08.234677    4656 ssh_runner.go:195] Run: which crictl
	I0816 10:24:08.237852    4656 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 10:24:08.262491    4656 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0816 10:24:08.262569    4656 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:24:08.282005    4656 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:24:08.324107    4656 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0816 10:24:08.365750    4656 out.go:177]   - env NO_PROXY=192.169.0.5
	I0816 10:24:08.386602    4656 main.go:141] libmachine: (ha-286000-m02) Calling .GetIP
	I0816 10:24:08.387035    4656 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0816 10:24:08.391617    4656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
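The one-liner above makes the hosts entry idempotent: it strips any stale host.minikube.internal line, appends the current mapping, and copies the temp file back over /etc/hosts. A rough Go equivalent of the same filter-and-append (not minikube's code; like the logged sudo cp, it must run as root):

package main

import (
	"os"
	"strings"
)

func main() {
	const suffix = "\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, suffix) { // drop any stale entry
			kept = append(kept, line)
		}
	}
	kept = append(kept, "192.169.0.1"+suffix) // append the current mapping
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}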
	I0816 10:24:08.401981    4656 mustload.go:65] Loading cluster: ha-286000
	I0816 10:24:08.402159    4656 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:24:08.402381    4656 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:24:08.402414    4656 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:24:08.411266    4656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52207
	I0816 10:24:08.411600    4656 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:24:08.411912    4656 main.go:141] libmachine: Using API Version  1
	I0816 10:24:08.411923    4656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:24:08.412158    4656 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:24:08.412273    4656 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:24:08.412350    4656 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:24:08.412439    4656 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 4669
	I0816 10:24:08.413371    4656 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:24:08.413648    4656 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:24:08.413671    4656 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:24:08.422352    4656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52209
	I0816 10:24:08.422710    4656 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:24:08.423035    4656 main.go:141] libmachine: Using API Version  1
	I0816 10:24:08.423046    4656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:24:08.423253    4656 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:24:08.423365    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:24:08.423454    4656 certs.go:68] Setting up /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000 for IP: 192.169.0.6
	I0816 10:24:08.423460    4656 certs.go:194] generating shared ca certs ...
	I0816 10:24:08.423469    4656 certs.go:226] acquiring lock for ca certs: {Name:mka549fbc467849834d2267da204028c49d88a3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:24:08.423616    4656 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key
	I0816 10:24:08.423685    4656 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key
	I0816 10:24:08.423693    4656 certs.go:256] generating profile certs ...
	I0816 10:24:08.423785    4656 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key
	I0816 10:24:08.423872    4656 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.df014ba6
	I0816 10:24:08.423924    4656 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key
	I0816 10:24:08.423931    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 10:24:08.423952    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 10:24:08.423978    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 10:24:08.423996    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 10:24:08.424013    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 10:24:08.424031    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 10:24:08.424049    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 10:24:08.424065    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 10:24:08.424139    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem (1338 bytes)
	W0816 10:24:08.424181    4656 certs.go:480] ignoring /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831_empty.pem, impossibly tiny 0 bytes
	I0816 10:24:08.424189    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 10:24:08.424243    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem (1078 bytes)
	I0816 10:24:08.424278    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem (1123 bytes)
	I0816 10:24:08.424308    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem (1679 bytes)
	I0816 10:24:08.424377    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:24:08.424414    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem -> /usr/share/ca-certificates/1831.pem
	I0816 10:24:08.424439    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /usr/share/ca-certificates/18312.pem
	I0816 10:24:08.424464    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:24:08.424490    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:24:08.424585    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:24:08.424670    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:24:08.424754    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:24:08.424829    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
	I0816 10:24:08.455631    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0816 10:24:08.459165    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0816 10:24:08.467170    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0816 10:24:08.470222    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0816 10:24:08.478239    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0816 10:24:08.481358    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0816 10:24:08.489236    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0816 10:24:08.492402    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0816 10:24:08.500317    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0816 10:24:08.503508    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0816 10:24:08.511673    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0816 10:24:08.514769    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0816 10:24:08.522766    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 10:24:08.542887    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 10:24:08.562071    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 10:24:08.581743    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 10:24:08.600945    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0816 10:24:08.620933    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 10:24:08.640254    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 10:24:08.659444    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 10:24:08.678715    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem --> /usr/share/ca-certificates/1831.pem (1338 bytes)
	I0816 10:24:08.697527    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /usr/share/ca-certificates/18312.pem (1708 bytes)
	I0816 10:24:08.716988    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 10:24:08.735913    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0816 10:24:08.749507    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0816 10:24:08.763125    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0816 10:24:08.776902    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0816 10:24:08.790611    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0816 10:24:08.804538    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0816 10:24:08.817970    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0816 10:24:08.831472    4656 ssh_runner.go:195] Run: openssl version
	I0816 10:24:08.835773    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18312.pem && ln -fs /usr/share/ca-certificates/18312.pem /etc/ssl/certs/18312.pem"
	I0816 10:24:08.845139    4656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18312.pem
	I0816 10:24:08.848508    4656 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 16:57 /usr/share/ca-certificates/18312.pem
	I0816 10:24:08.848545    4656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18312.pem
	I0816 10:24:08.852837    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18312.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 10:24:08.861881    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 10:24:08.870959    4656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:24:08.874362    4656 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:48 /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:24:08.874393    4656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:24:08.878676    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 10:24:08.887721    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1831.pem && ln -fs /usr/share/ca-certificates/1831.pem /etc/ssl/certs/1831.pem"
	I0816 10:24:08.896767    4656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1831.pem
	I0816 10:24:08.900184    4656 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 16:57 /usr/share/ca-certificates/1831.pem
	I0816 10:24:08.900218    4656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1831.pem
	I0816 10:24:08.904590    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1831.pem /etc/ssl/certs/51391683.0"
	I0816 10:24:08.913817    4656 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 10:24:08.917320    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 10:24:08.921592    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 10:24:08.925840    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 10:24:08.930232    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 10:24:08.934401    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 10:24:08.938749    4656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
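The repeated `openssl x509 -checkend 86400` probes above exit non-zero if a certificate expires within the next 86400 s (24 h); combined with the earlier "skipping valid ... cert regeneration" lines, this is the validity check that decides whether certs must be regenerated. A rough Go equivalent for one of the certs, using only the standard library:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Mirrors -checkend 86400: fail if the cert expires within 24h.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least another 24h")
}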
	I0816 10:24:08.943061    4656 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.0 docker true true} ...
	I0816 10:24:08.943117    4656 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-286000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 10:24:08.943138    4656 kube-vip.go:115] generating kube-vip config ...
	I0816 10:24:08.943173    4656 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 10:24:08.956099    4656 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 10:24:08.956137    4656 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0816 10:24:08.956187    4656 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 10:24:08.964732    4656 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 10:24:08.964780    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0816 10:24:08.972962    4656 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0816 10:24:08.986351    4656 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 10:24:08.999555    4656 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0816 10:24:09.013514    4656 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0816 10:24:09.016494    4656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:24:09.026607    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:24:09.119324    4656 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:24:09.134140    4656 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:24:09.134339    4656 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:24:09.155614    4656 out.go:177] * Verifying Kubernetes components...
	I0816 10:24:09.197468    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:24:09.303306    4656 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:24:09.318292    4656 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:24:09.318481    4656 kapi.go:59] client config for ha-286000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key", CAFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x5540f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0816 10:24:09.318519    4656 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0816 10:24:09.318689    4656 node_ready.go:35] waiting up to 6m0s for node "ha-286000-m02" to be "Ready" ...
	I0816 10:24:09.318767    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:24:09.318772    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:09.318780    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:09.318783    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.478519    4656 round_trippers.go:574] Response Status: 200 OK in 9160 milliseconds
	I0816 10:24:18.479788    4656 node_ready.go:49] node "ha-286000-m02" has status "Ready":"True"
	I0816 10:24:18.479801    4656 node_ready.go:38] duration metric: took 9.161930596s for node "ha-286000-m02" to be "Ready" ...
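The wait above polls GET /api/v1/nodes/ha-286000-m02 until the node reports a Ready condition. A minimal client-go sketch of the same wait; only the node name, kubeconfig path, and 6m0s budget come from the log, while the 2 s poll interval is illustrative:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19461-1276/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // the 6m0s budget from the log
	for time.Now().Before(deadline) {
		node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "ha-286000-m02", metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					fmt.Println(`node "ha-286000-m02" is Ready`)
					return
				}
			}
		}
		time.Sleep(2 * time.Second) // illustrative poll interval
	}
	panic("timed out waiting for node to become Ready")
}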
	I0816 10:24:18.479809    4656 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 10:24:18.479841    4656 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0816 10:24:18.479849    4656 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0816 10:24:18.479888    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:24:18.479893    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.479899    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.479903    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.524673    4656 round_trippers.go:574] Response Status: 200 OK in 44 milliseconds
	I0816 10:24:18.529733    4656 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-2kqjf" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:18.529785    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-2kqjf
	I0816 10:24:18.529790    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.529807    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.529813    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.533009    4656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:24:18.533408    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:24:18.533415    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.533421    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.533425    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.536536    4656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:24:18.536873    4656 pod_ready.go:93] pod "coredns-6f6b679f8f-2kqjf" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:18.536881    4656 pod_ready.go:82] duration metric: took 7.13625ms for pod "coredns-6f6b679f8f-2kqjf" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:18.536890    4656 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-rfbz7" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:18.536923    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-rfbz7
	I0816 10:24:18.536928    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.536933    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.536936    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.538881    4656 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:24:18.539268    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:24:18.539275    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.539280    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.539283    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.541207    4656 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:24:18.541586    4656 pod_ready.go:93] pod "coredns-6f6b679f8f-rfbz7" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:18.541594    4656 pod_ready.go:82] duration metric: took 4.698747ms for pod "coredns-6f6b679f8f-rfbz7" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:18.541600    4656 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:18.541631    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-286000
	I0816 10:24:18.541636    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.541641    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.541646    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.543814    4656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:24:18.544226    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:24:18.544232    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.544238    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.544241    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.546294    4656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:24:18.546667    4656 pod_ready.go:93] pod "etcd-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:18.546676    4656 pod_ready.go:82] duration metric: took 5.071416ms for pod "etcd-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:18.546683    4656 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:18.546714    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-286000-m02
	I0816 10:24:18.546719    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.546724    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.546727    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.548810    4656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:24:18.549180    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:24:18.549187    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.549193    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.549196    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.551164    4656 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 10:24:18.551594    4656 pod_ready.go:93] pod "etcd-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:18.551602    4656 pod_ready.go:82] duration metric: took 4.914791ms for pod "etcd-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:18.551612    4656 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:18.551646    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000
	I0816 10:24:18.551651    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.551657    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.551661    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.553736    4656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:24:18.680501    4656 request.go:632] Waited for 126.254478ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:24:18.680609    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:24:18.680620    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.680631    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.680639    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.684350    4656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
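The "Waited ... due to client-side throttling" lines here are client-go's default rate limiter at work: the rest.Config dumped at kapi.go:59 above has QPS:0 and Burst:0, which client-go treats as its defaults of 5 requests/s with a burst of 10, so this run of back-to-back node and pod GETs gets spaced out by roughly 200 ms. A hypothetical snippet raising those limits on a similar config:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19461-1276/kubeconfig")
	if err != nil {
		panic(err)
	}
	config.QPS = 50    // client-go treats 0 as the default of 5
	config.Burst = 100 // client-go treats 0 as the default of 10
	if _, err := kubernetes.NewForConfig(config); err != nil {
		panic(err)
	}
}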
	I0816 10:24:18.684850    4656 pod_ready.go:93] pod "kube-apiserver-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:18.684859    4656 pod_ready.go:82] duration metric: took 133.250923ms for pod "kube-apiserver-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:18.684865    4656 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:18.880626    4656 request.go:632] Waited for 195.713304ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000-m02
	I0816 10:24:18.880742    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-286000-m02
	I0816 10:24:18.880753    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:18.880765    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:18.880778    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:18.884447    4656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:24:19.081261    4656 request.go:632] Waited for 196.182218ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:24:19.081349    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:24:19.081358    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:19.081368    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:19.081377    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:19.085528    4656 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 10:24:19.085961    4656 pod_ready.go:93] pod "kube-apiserver-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:19.085970    4656 pod_ready.go:82] duration metric: took 401.129633ms for pod "kube-apiserver-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:19.085977    4656 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:19.279954    4656 request.go:632] Waited for 193.926578ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000
	I0816 10:24:19.279986    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000
	I0816 10:24:19.279991    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:19.279997    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:19.280003    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:19.283105    4656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:24:19.480663    4656 request.go:632] Waited for 196.83909ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:24:19.480698    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:24:19.480704    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:19.480710    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:19.480728    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:19.483828    4656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:24:19.484258    4656 pod_ready.go:93] pod "kube-controller-manager-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:19.484269    4656 pod_ready.go:82] duration metric: took 398.316107ms for pod "kube-controller-manager-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:19.484276    4656 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:19.681917    4656 request.go:632] Waited for 197.597037ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000-m02
	I0816 10:24:19.682075    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-286000-m02
	I0816 10:24:19.682091    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:19.682103    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:19.682113    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:19.686127    4656 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 10:24:19.880667    4656 request.go:632] Waited for 193.865313ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:24:19.880730    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:24:19.880736    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:19.880742    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:19.880750    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:19.884780    4656 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 10:24:19.885298    4656 pod_ready.go:93] pod "kube-controller-manager-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:19.885308    4656 pod_ready.go:82] duration metric: took 401.055356ms for pod "kube-controller-manager-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:19.885315    4656 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5qhgk" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:20.081205    4656 request.go:632] Waited for 195.805147ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5qhgk
	I0816 10:24:20.081294    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5qhgk
	I0816 10:24:20.081304    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:20.081316    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:20.081321    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:20.085631    4656 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 10:24:20.280455    4656 request.go:632] Waited for 194.474574ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m04
	I0816 10:24:20.280532    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m04
	I0816 10:24:20.280539    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:20.280547    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:20.280552    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:20.287097    4656 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0816 10:24:20.287492    4656 pod_ready.go:93] pod "kube-proxy-5qhgk" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:20.287501    4656 pod_ready.go:82] duration metric: took 402.209883ms for pod "kube-proxy-5qhgk" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:20.287508    4656 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pt669" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:20.480572    4656 request.go:632] Waited for 193.037822ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pt669
	I0816 10:24:20.480648    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pt669
	I0816 10:24:20.480652    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:20.480659    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:20.480663    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:20.483171    4656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:24:20.681664    4656 request.go:632] Waited for 198.111953ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:24:20.681763    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:24:20.681771    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:20.681779    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:20.681784    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:20.684372    4656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:24:20.684693    4656 pod_ready.go:93] pod "kube-proxy-pt669" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:20.684702    4656 pod_ready.go:82] duration metric: took 397.216841ms for pod "kube-proxy-pt669" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:20.684712    4656 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-w4nt2" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:20.879782    4656 request.go:632] Waited for 195.039009ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w4nt2
	I0816 10:24:20.879907    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w4nt2
	I0816 10:24:20.879921    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:20.879933    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:20.879941    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:20.883394    4656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:24:21.079930    4656 request.go:632] Waited for 195.888686ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:24:21.080028    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:24:21.080039    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:21.080050    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:21.080059    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:21.083488    4656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:24:21.083893    4656 pod_ready.go:93] pod "kube-proxy-w4nt2" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:21.083903    4656 pod_ready.go:82] duration metric: took 399.212461ms for pod "kube-proxy-w4nt2" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:21.083911    4656 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:21.281558    4656 request.go:632] Waited for 197.607208ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000
	I0816 10:24:21.281628    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000
	I0816 10:24:21.281639    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:21.281648    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:21.281654    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:21.284223    4656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 10:24:21.480419    4656 request.go:632] Waited for 195.838756ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:24:21.480514    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000
	I0816 10:24:21.480525    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:21.480537    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:21.480544    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:21.483887    4656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:24:21.484430    4656 pod_ready.go:93] pod "kube-scheduler-ha-286000" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:21.484439    4656 pod_ready.go:82] duration metric: took 400.549346ms for pod "kube-scheduler-ha-286000" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:21.484446    4656 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:21.679727    4656 request.go:632] Waited for 195.252345ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000-m02
	I0816 10:24:21.679760    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-286000-m02
	I0816 10:24:21.679765    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:21.679769    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:21.679805    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:21.686476    4656 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0816 10:24:21.880162    4656 request.go:632] Waited for 193.203193ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:24:21.880221    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m02
	I0816 10:24:21.880231    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:21.880247    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:21.880256    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:21.884015    4656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:24:21.884602    4656 pod_ready.go:93] pod "kube-scheduler-ha-286000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 10:24:21.884611    4656 pod_ready.go:82] duration metric: took 400.186514ms for pod "kube-scheduler-ha-286000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 10:24:21.884619    4656 pod_ready.go:39] duration metric: took 3.405043457s for extra waiting for all system-critical pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 10:24:21.884636    4656 api_server.go:52] waiting for apiserver process to appear ...
	I0816 10:24:21.884692    4656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:24:21.896175    4656 api_server.go:72] duration metric: took 12.763101701s to wait for apiserver process to appear ...
	I0816 10:24:21.896187    4656 api_server.go:88] waiting for apiserver healthz status ...
	I0816 10:24:21.896203    4656 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0816 10:24:21.900677    4656 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
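The healthz probe above is a plain HTTPS GET against the apiserver that expects the literal body "ok". A standalone sketch of the same probe, reusing the CA and client-certificate paths from the client config dumped earlier in this log (illustrative only, not minikube's api_server.go code):

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	caPEM, err := os.ReadFile("/Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)
	cert, err := tls.LoadX509KeyPair(
		"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt",
		"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key",
	)
	if err != nil {
		panic(err)
	}
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{RootCAs: pool, Certificates: []tls.Certificate{cert}},
	}}
	resp, err := client.Get("https://192.169.0.5:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}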
	I0816 10:24:21.900711    4656 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0816 10:24:21.900715    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:21.900720    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:21.900725    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:21.901496    4656 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0816 10:24:21.901599    4656 api_server.go:141] control plane version: v1.31.0
	I0816 10:24:21.901609    4656 api_server.go:131] duration metric: took 5.41777ms to wait for apiserver health ...
	I0816 10:24:21.901617    4656 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 10:24:22.081425    4656 request.go:632] Waited for 179.775499ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:24:22.081509    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:24:22.081521    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:22.081533    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:22.081542    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:22.087308    4656 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 10:24:22.090908    4656 system_pods.go:59] 19 kube-system pods found
	I0816 10:24:22.090924    4656 system_pods.go:61] "coredns-6f6b679f8f-2kqjf" [be18cd09-3e3b-4749-bf29-7001a879f593] Running
	I0816 10:24:22.090929    4656 system_pods.go:61] "coredns-6f6b679f8f-rfbz7" [a593e27a-e38b-46c7-a603-44963c31c095] Running
	I0816 10:24:22.090932    4656 system_pods.go:61] "etcd-ha-286000" [c45bc623-befe-4e88-8da1-bb4f7d7be5b2] Running
	I0816 10:24:22.090935    4656 system_pods.go:61] "etcd-ha-286000-m02" [e14fbe0c-4527-4cbb-8c13-01c0b32a8d15] Running
	I0816 10:24:22.090938    4656 system_pods.go:61] "kindnet-b9r6s" [07db3ed9-4355-45a2-be01-15273d773c65] Running
	I0816 10:24:22.090940    4656 system_pods.go:61] "kindnet-t9kjf" [20c57f28-5e86-44a4-8a2a-07e1e240abf2] Running
	I0816 10:24:22.090943    4656 system_pods.go:61] "kindnet-whqxb" [1cc5291b-52ea-44b5-b87c-607d46b5281a] Running
	I0816 10:24:22.090946    4656 system_pods.go:61] "kube-apiserver-ha-286000" [c0d298a9-931b-4084-9e5f-01a88de27456] Running
	I0816 10:24:22.090949    4656 system_pods.go:61] "kube-apiserver-ha-286000-m02" [3666c131-2b88-483a-ab8c-b18cb8db7d6f] Running
	I0816 10:24:22.090952    4656 system_pods.go:61] "kube-controller-manager-ha-286000" [5a4f4c5d-d89a-4e5f-b022-479ca31f64cd] Running
	I0816 10:24:22.090954    4656 system_pods.go:61] "kube-controller-manager-ha-286000-m02" [a347c3d4-fa47-49ea-93a0-83087017a2bd] Running
	I0816 10:24:22.090957    4656 system_pods.go:61] "kube-proxy-5qhgk" [da0cb383-9e0b-44cc-9f79-7861029514f7] Running
	I0816 10:24:22.090959    4656 system_pods.go:61] "kube-proxy-pt669" [798c956f-8f08-465a-b0d9-90b65f703f80] Running
	I0816 10:24:22.090962    4656 system_pods.go:61] "kube-proxy-w4nt2" [79ce2248-f8fd-4a3b-aeb7-f81d7ad64564] Running
	I0816 10:24:22.090967    4656 system_pods.go:61] "kube-scheduler-ha-286000" [b4af80e0-cbb0-4257-a212-3585faff0875] Running
	I0816 10:24:22.090971    4656 system_pods.go:61] "kube-scheduler-ha-286000-m02" [89920e93-c959-47ec-8a2c-f2b63e8c3d3a] Running
	I0816 10:24:22.090973    4656 system_pods.go:61] "kube-vip-ha-286000" [b0f85be2-2afb-44b0-a701-161b24529349] Running
	I0816 10:24:22.090976    4656 system_pods.go:61] "kube-vip-ha-286000-m02" [4982dc53-946d-4f75-8e9e-452ef39ff2fa] Running
	I0816 10:24:22.090978    4656 system_pods.go:61] "storage-provisioner" [4805d53b-2db3-4092-a3f2-d4a854e93adc] Running
	I0816 10:24:22.090983    4656 system_pods.go:74] duration metric: took 189.374292ms to wait for pod list to return data ...
	I0816 10:24:22.090989    4656 default_sa.go:34] waiting for default service account to be created ...
	I0816 10:24:22.280932    4656 request.go:632] Waited for 189.91131ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0816 10:24:22.280992    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0816 10:24:22.280998    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:22.281004    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:22.281007    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:22.286126    4656 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 10:24:22.286303    4656 default_sa.go:45] found service account: "default"
	I0816 10:24:22.286313    4656 default_sa.go:55] duration metric: took 195.332329ms for default service account to be created ...
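
The service-account and pod waits are plain list calls against the apiserver; the "Waited ... due to client-side throttling" messages come from client-go's default rate limiter, not from server-side priority and fairness. A minimal client-go sketch of the kube-system pod check, with a hypothetical kubeconfig path:

// Sketch (not minikube's system_pods.go): list kube-system pods and flag any not Running.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			fmt.Printf("%s not running: %s\n", p.Name, p.Status.Phase)
		}
	}
}
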
	I0816 10:24:22.286320    4656 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 10:24:22.480087    4656 request.go:632] Waited for 193.706904ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:24:22.480149    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 10:24:22.480160    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:22.480172    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:22.480181    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:22.486391    4656 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0816 10:24:22.490416    4656 system_pods.go:86] 19 kube-system pods found
	I0816 10:24:22.490428    4656 system_pods.go:89] "coredns-6f6b679f8f-2kqjf" [be18cd09-3e3b-4749-bf29-7001a879f593] Running
	I0816 10:24:22.490432    4656 system_pods.go:89] "coredns-6f6b679f8f-rfbz7" [a593e27a-e38b-46c7-a603-44963c31c095] Running
	I0816 10:24:22.490435    4656 system_pods.go:89] "etcd-ha-286000" [c45bc623-befe-4e88-8da1-bb4f7d7be5b2] Running
	I0816 10:24:22.490443    4656 system_pods.go:89] "etcd-ha-286000-m02" [e14fbe0c-4527-4cbb-8c13-01c0b32a8d15] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 10:24:22.490447    4656 system_pods.go:89] "kindnet-b9r6s" [07db3ed9-4355-45a2-be01-15273d773c65] Running
	I0816 10:24:22.490454    4656 system_pods.go:89] "kindnet-t9kjf" [20c57f28-5e86-44a4-8a2a-07e1e240abf2] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0816 10:24:22.490458    4656 system_pods.go:89] "kindnet-whqxb" [1cc5291b-52ea-44b5-b87c-607d46b5281a] Running
	I0816 10:24:22.490462    4656 system_pods.go:89] "kube-apiserver-ha-286000" [c0d298a9-931b-4084-9e5f-01a88de27456] Running
	I0816 10:24:22.490466    4656 system_pods.go:89] "kube-apiserver-ha-286000-m02" [3666c131-2b88-483a-ab8c-b18cb8db7d6f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 10:24:22.490469    4656 system_pods.go:89] "kube-controller-manager-ha-286000" [5a4f4c5d-d89a-4e5f-b022-479ca31f64cd] Running
	I0816 10:24:22.490478    4656 system_pods.go:89] "kube-controller-manager-ha-286000-m02" [a347c3d4-fa47-49ea-93a0-83087017a2bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 10:24:22.490483    4656 system_pods.go:89] "kube-proxy-5qhgk" [da0cb383-9e0b-44cc-9f79-7861029514f7] Running
	I0816 10:24:22.490487    4656 system_pods.go:89] "kube-proxy-pt669" [798c956f-8f08-465a-b0d9-90b65f703f80] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 10:24:22.490496    4656 system_pods.go:89] "kube-proxy-w4nt2" [79ce2248-f8fd-4a3b-aeb7-f81d7ad64564] Running
	I0816 10:24:22.490499    4656 system_pods.go:89] "kube-scheduler-ha-286000" [b4af80e0-cbb0-4257-a212-3585faff0875] Running
	I0816 10:24:22.490503    4656 system_pods.go:89] "kube-scheduler-ha-286000-m02" [89920e93-c959-47ec-8a2c-f2b63e8c3d3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 10:24:22.490507    4656 system_pods.go:89] "kube-vip-ha-286000" [b0f85be2-2afb-44b0-a701-161b24529349] Running
	I0816 10:24:22.490511    4656 system_pods.go:89] "kube-vip-ha-286000-m02" [4982dc53-946d-4f75-8e9e-452ef39ff2fa] Running
	I0816 10:24:22.490514    4656 system_pods.go:89] "storage-provisioner" [4805d53b-2db3-4092-a3f2-d4a854e93adc] Running
	I0816 10:24:22.490518    4656 system_pods.go:126] duration metric: took 204.207739ms to wait for k8s-apps to be running ...
	I0816 10:24:22.490523    4656 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 10:24:22.490574    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:24:22.501971    4656 system_svc.go:56] duration metric: took 11.445041ms WaitForService to wait for kubelet
	I0816 10:24:22.501986    4656 kubeadm.go:582] duration metric: took 13.368953512s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:24:22.501997    4656 node_conditions.go:102] verifying NodePressure condition ...
	I0816 10:24:22.681633    4656 request.go:632] Waited for 179.608953ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0816 10:24:22.681696    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0816 10:24:22.681702    4656 round_trippers.go:469] Request Headers:
	I0816 10:24:22.681708    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:24:22.681744    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:24:22.684771    4656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 10:24:22.685508    4656 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 10:24:22.685523    4656 node_conditions.go:123] node cpu capacity is 2
	I0816 10:24:22.685532    4656 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 10:24:22.685535    4656 node_conditions.go:123] node cpu capacity is 2
	I0816 10:24:22.685538    4656 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 10:24:22.685541    4656 node_conditions.go:123] node cpu capacity is 2
	I0816 10:24:22.685544    4656 node_conditions.go:105] duration metric: took 183.55481ms to run NodePressure ...
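
The NodePressure pass reads each node's capacity (the log shows 17734596Ki ephemeral storage and 2 CPUs per node) and its pressure conditions. A sketch of the same read, again with a hypothetical kubeconfig path:

// Sketch of the NodePressure verification: capacity plus pressure conditions per node.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	cs, _ := kubernetes.NewForConfig(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("node %s: ephemeral storage=%s cpu=%s\n", n.Name,
			n.Status.Capacity.StorageEphemeral().String(),
			n.Status.Capacity.Cpu().String())
		for _, c := range n.Status.Conditions {
			// A healthy node reports MemoryPressure/DiskPressure = False.
			if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) &&
				c.Status == corev1.ConditionTrue {
				fmt.Printf("  pressure condition %s is True\n", c.Type)
			}
		}
	}
}
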
	I0816 10:24:22.685552    4656 start.go:241] waiting for startup goroutines ...
	I0816 10:24:22.685571    4656 start.go:255] writing updated cluster config ...
	I0816 10:24:22.707964    4656 out.go:201] 
	I0816 10:24:22.729754    4656 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:24:22.729889    4656 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:24:22.752182    4656 out.go:177] * Starting "ha-286000-m03" control-plane node in "ha-286000" cluster
	I0816 10:24:22.794355    4656 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:24:22.794388    4656 cache.go:56] Caching tarball of preloaded images
	I0816 10:24:22.794595    4656 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 10:24:22.794623    4656 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:24:22.794796    4656 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:24:22.870926    4656 start.go:360] acquireMachinesLock for ha-286000-m03: {Name:mk0510f0432b1e24fd07d899155e85dff8756d5a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:24:22.871061    4656 start.go:364] duration metric: took 106.312µs to acquireMachinesLock for "ha-286000-m03"
	I0816 10:24:22.871092    4656 start.go:96] Skipping create...Using existing machine configuration
	I0816 10:24:22.871102    4656 fix.go:54] fixHost starting: m03
	I0816 10:24:22.871530    4656 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:24:22.871567    4656 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:24:22.881793    4656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52214
	I0816 10:24:22.882176    4656 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:24:22.882559    4656 main.go:141] libmachine: Using API Version  1
	I0816 10:24:22.882581    4656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:24:22.882800    4656 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:24:22.882926    4656 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:24:22.883020    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetState
	I0816 10:24:22.883103    4656 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:24:22.883215    4656 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 3849
	I0816 10:24:22.884141    4656 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid 3849 missing from process table
	I0816 10:24:22.884173    4656 fix.go:112] recreateIfNeeded on ha-286000-m03: state=Stopped err=<nil>
	I0816 10:24:22.884183    4656 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	W0816 10:24:22.884273    4656 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 10:24:22.934970    4656 out.go:177] * Restarting existing hyperkit VM for "ha-286000-m03" ...
	I0816 10:24:22.989195    4656 main.go:141] libmachine: (ha-286000-m03) Calling .Start
	I0816 10:24:22.989384    4656 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:24:22.989428    4656 main.go:141] libmachine: (ha-286000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/hyperkit.pid
	I0816 10:24:22.990416    4656 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid 3849 missing from process table
	I0816 10:24:22.990433    4656 main.go:141] libmachine: (ha-286000-m03) DBG | pid 3849 is in state "Stopped"
	I0816 10:24:22.990450    4656 main.go:141] libmachine: (ha-286000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/hyperkit.pid...
	I0816 10:24:22.991046    4656 main.go:141] libmachine: (ha-286000-m03) DBG | Using UUID 408efb18-ff91-428f-b414-6eb0d982a779
	I0816 10:24:23.018344    4656 main.go:141] libmachine: (ha-286000-m03) DBG | Generated MAC 8a:e:de:5b:b5:8b
	I0816 10:24:23.018367    4656 main.go:141] libmachine: (ha-286000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000
	I0816 10:24:23.018512    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"408efb18-ff91-428f-b414-6eb0d982a779", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003acb40)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:24:23.018542    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"408efb18-ff91-428f-b414-6eb0d982a779", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003acb40)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 10:24:23.018607    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "408efb18-ff91-428f-b414-6eb0d982a779", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/ha-286000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"}
	I0816 10:24:23.018646    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 408efb18-ff91-428f-b414-6eb0d982a779 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/ha-286000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/tty,log=/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/bzimage,/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-286000"
	I0816 10:24:23.018659    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 10:24:23.019982    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 DEBUG: hyperkit: Pid is 4694
	I0816 10:24:23.020375    4656 main.go:141] libmachine: (ha-286000-m03) DBG | Attempt 0
	I0816 10:24:23.020392    4656 main.go:141] libmachine: (ha-286000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:24:23.020487    4656 main.go:141] libmachine: (ha-286000-m03) DBG | hyperkit pid from json: 4694
	I0816 10:24:23.022453    4656 main.go:141] libmachine: (ha-286000-m03) DBG | Searching for 8a:e:de:5b:b5:8b in /var/db/dhcpd_leases ...
	I0816 10:24:23.022498    4656 main.go:141] libmachine: (ha-286000-m03) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0816 10:24:23.022517    4656 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:72:69:8f:11:68:1d ID:1,72:69:8f:11:68:1d Lease:0x66c0dcaf}
	I0816 10:24:23.022531    4656 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:66:c8:48:4e:12:1b ID:1,66:c8:48:4e:12:1b Lease:0x66c0dc9d}
	I0816 10:24:23.022542    4656 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:a:ab:1f:8:77:9a ID:1,a:ab:1f:8:77:9a Lease:0x66c0db17}
	I0816 10:24:23.022552    4656 main.go:141] libmachine: (ha-286000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:e:de:5b:b5:8b ID:1,8a:e:de:5b:b5:8b Lease:0x66c0d7f9}
	I0816 10:24:23.022566    4656 main.go:141] libmachine: (ha-286000-m03) DBG | Found match: 8a:e:de:5b:b5:8b
	I0816 10:24:23.022574    4656 main.go:141] libmachine: (ha-286000-m03) DBG | IP: 192.169.0.7
	I0816 10:24:23.022592    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetConfigRaw
	I0816 10:24:23.023252    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:24:23.023444    4656 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/config.json ...
	I0816 10:24:23.023931    4656 machine.go:93] provisionDockerMachine start ...
	I0816 10:24:23.023941    4656 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:24:23.024079    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:24:23.024190    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:24:23.024302    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:23.024432    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:23.024554    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:24:23.024692    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:23.024832    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:24:23.024839    4656 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 10:24:23.028441    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 10:24:23.037003    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 10:24:23.038503    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:24:23.038539    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:24:23.038554    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:24:23.038589    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:24:23.422756    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 10:24:23.422770    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 10:24:23.537534    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 10:24:23.537554    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 10:24:23.537563    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 10:24:23.537570    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 10:24:23.538449    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 10:24:23.538460    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:23 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 10:24:29.168490    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:29 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 10:24:29.168581    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:29 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 10:24:29.168594    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:29 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 10:24:29.192004    4656 main.go:141] libmachine: (ha-286000-m03) DBG | 2024/08/16 10:24:29 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 10:24:58.091940    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 10:24:58.091955    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:24:58.092103    4656 buildroot.go:166] provisioning hostname "ha-286000-m03"
	I0816 10:24:58.092114    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:24:58.092224    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:24:58.092330    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:24:58.092419    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:58.092518    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:58.092626    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:24:58.092758    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:58.092916    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:24:58.092925    4656 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-286000-m03 && echo "ha-286000-m03" | sudo tee /etc/hostname
	I0816 10:24:58.165459    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-286000-m03
	
	I0816 10:24:58.165475    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:24:58.165609    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:24:58.165705    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:58.165800    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:58.165888    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:24:58.166012    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:58.166160    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:24:58.166171    4656 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-286000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-286000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-286000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 10:24:58.234524    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
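
Every provisioning step above (hostname, /etc/hosts, the systemd unit later on) is a one-off SSH command run against the VM. A minimal runner in the spirit of these logs, using golang.org/x/crypto/ssh; the key path is hypothetical and host-key checking is skipped here as an assumption:

// Sketch: run one command over SSH with public-key auth, capturing stdout.
package main

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func runSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // assumption: no host-key pinning
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	var out bytes.Buffer
	sess.Stdout = &out
	err = sess.Run(cmd)
	return out.String(), err
}

func main() {
	out, err := runSSH("192.169.0.7:22", "docker", "/path/to/id_rsa", // hypothetical key path
		`sudo hostname ha-286000-m03 && echo "ha-286000-m03" | sudo tee /etc/hostname`)
	fmt.Println(out, err)
}
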
	I0816 10:24:58.234539    4656 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19461-1276/.minikube CaCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19461-1276/.minikube}
	I0816 10:24:58.234548    4656 buildroot.go:174] setting up certificates
	I0816 10:24:58.234555    4656 provision.go:84] configureAuth start
	I0816 10:24:58.234562    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetMachineName
	I0816 10:24:58.234691    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:24:58.234792    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:24:58.234865    4656 provision.go:143] copyHostCerts
	I0816 10:24:58.234895    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:24:58.234961    4656 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem, removing ...
	I0816 10:24:58.234967    4656 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem
	I0816 10:24:58.235111    4656 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.pem (1078 bytes)
	I0816 10:24:58.235314    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:24:58.235356    4656 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem, removing ...
	I0816 10:24:58.235361    4656 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem
	I0816 10:24:58.235442    4656 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/cert.pem (1123 bytes)
	I0816 10:24:58.235582    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:24:58.235624    4656 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem, removing ...
	I0816 10:24:58.235629    4656 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem
	I0816 10:24:58.235704    4656 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19461-1276/.minikube/key.pem (1679 bytes)
	I0816 10:24:58.235845    4656 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem org=jenkins.ha-286000-m03 san=[127.0.0.1 192.169.0.7 ha-286000-m03 localhost minikube]
	I0816 10:24:58.291944    4656 provision.go:177] copyRemoteCerts
	I0816 10:24:58.291996    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 10:24:58.292012    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:24:58.292152    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:24:58.292249    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:58.292325    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:24:58.292403    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:24:58.328961    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 10:24:58.329060    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 10:24:58.348824    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 10:24:58.348900    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 10:24:58.369137    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 10:24:58.369210    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 10:24:58.388899    4656 provision.go:87] duration metric: took 154.336521ms to configureAuth
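
configureAuth generates a server certificate whose SANs cover the node's IPs and hostnames (the san=[...] list logged above), then copies it to /etc/docker. A self-signed crypto/x509 sketch using that SAN list; minikube signs with its own CA rather than self-signing:

// Sketch: issue a TLS server cert with IP and DNS SANs matching the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-286000-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the logged san=[...] list.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.7")},
		DNSNames:    []string{"ha-286000-m03", "localhost", "minikube"},
	}
	// Self-signed for brevity: template doubles as parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
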
	I0816 10:24:58.388918    4656 buildroot.go:189] setting minikube options for container-runtime
	I0816 10:24:58.389098    4656 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:24:58.389135    4656 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:24:58.389270    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:24:58.389362    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:24:58.389460    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:58.389543    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:58.389622    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:24:58.389731    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:58.389859    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:24:58.389867    4656 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 10:24:58.452406    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 10:24:58.452425    4656 buildroot.go:70] root file system type: tmpfs
	I0816 10:24:58.452504    4656 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 10:24:58.452516    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:24:58.452651    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:24:58.452745    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:58.452844    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:58.452943    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:24:58.453082    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:58.453228    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:24:58.453271    4656 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 10:24:58.524937    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 10:24:58.524958    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:24:58.525096    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:24:58.525191    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:58.525277    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:24:58.525354    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:24:58.525485    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:24:58.525630    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:24:58.525643    4656 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 10:25:00.070144    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0816 10:25:00.070159    4656 machine.go:96] duration metric: took 37.04784939s to provisionDockerMachine
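
provisionDockerMachine only swaps in the rendered docker.service and restarts the daemon when it differs from the installed unit, via the `diff ... || { mv ...; }` one-liner above (here the diff fails because no unit existed yet, so the new one is installed). A local Go sketch of that update-if-changed pattern; minikube drives the equivalent over SSH:

// Sketch: replace a systemd unit and restart only when contents changed.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func updateUnit(path string, rendered []byte) error {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, rendered) {
		return nil // unchanged, nothing to do
	}
	if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	// daemon-reload + restart, as in the logged shell one-liner.
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %s", err, out)
		}
	}
	return nil
}

func main() {
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
	fmt.Println(updateUnit("/lib/systemd/system/docker.service", unit))
}
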
	I0816 10:25:00.070167    4656 start.go:293] postStartSetup for "ha-286000-m03" (driver="hyperkit")
	I0816 10:25:00.070174    4656 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 10:25:00.070189    4656 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:25:00.070367    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 10:25:00.070380    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:25:00.070472    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:25:00.070550    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:25:00.070650    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:25:00.070738    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:25:00.107373    4656 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 10:25:00.110616    4656 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 10:25:00.110628    4656 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/addons for local assets ...
	I0816 10:25:00.110727    4656 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1276/.minikube/files for local assets ...
	I0816 10:25:00.110900    4656 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> 18312.pem in /etc/ssl/certs
	I0816 10:25:00.110906    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /etc/ssl/certs/18312.pem
	I0816 10:25:00.111116    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 10:25:00.118270    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:25:00.138002    4656 start.go:296] duration metric: took 67.828962ms for postStartSetup
	I0816 10:25:00.138023    4656 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:25:00.138205    4656 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0816 10:25:00.138223    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:25:00.138316    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:25:00.138399    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:25:00.138484    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:25:00.138558    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:25:00.176923    4656 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0816 10:25:00.176990    4656 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0816 10:25:00.228121    4656 fix.go:56] duration metric: took 37.358659467s for fixHost
	I0816 10:25:00.228163    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:25:00.228436    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:25:00.228658    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:25:00.228845    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:25:00.229035    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:25:00.229265    4656 main.go:141] libmachine: Using SSH client type: native
	I0816 10:25:00.229477    4656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e87ea0] 0x3e8ac00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0816 10:25:00.229490    4656 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 10:25:00.290756    4656 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723829100.434156000
	
	I0816 10:25:00.290771    4656 fix.go:216] guest clock: 1723829100.434156000
	I0816 10:25:00.290778    4656 fix.go:229] Guest: 2024-08-16 10:25:00.434156 -0700 PDT Remote: 2024-08-16 10:25:00.228148 -0700 PDT m=+88.850268934 (delta=206.008ms)
	I0816 10:25:00.290788    4656 fix.go:200] guest clock delta is within tolerance: 206.008ms
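
The clock check parses the guest's `date +%s.%N` output and compares it to the host time captured around the call. A sketch reproducing the ~206ms delta from the log; the 2-second tolerance below is an assumption, since the actual threshold isn't printed here:

// Sketch of the guest-clock delta check at fix.go:216-229.
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func clockDelta(guestStamp string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestStamp, 64)
	if err != nil {
		return 0, err
	}
	// float64 loses sub-microsecond precision at epoch scale; fine for a tolerance check.
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Values taken from the log above.
	host := time.Unix(0, int64(1723829100.228148*float64(time.Second)))
	d, _ := clockDelta("1723829100.434156000", host)
	fmt.Printf("delta=%v within tolerance=%v\n", d, math.Abs(d.Seconds()) < 2) // assumed 2s tolerance
}
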
	I0816 10:25:00.290792    4656 start.go:83] releasing machines lock for "ha-286000-m03", held for 37.421364862s
	I0816 10:25:00.290808    4656 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:25:00.290938    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:25:00.313666    4656 out.go:177] * Found network options:
	I0816 10:25:00.334418    4656 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0816 10:25:00.355435    4656 proxy.go:119] fail to check proxy env: Error ip not in block
	W0816 10:25:00.355461    4656 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:25:00.355478    4656 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:25:00.356143    4656 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:25:00.356356    4656 main.go:141] libmachine: (ha-286000-m03) Calling .DriverName
	I0816 10:25:00.356474    4656 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 10:25:00.356513    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	W0816 10:25:00.356569    4656 proxy.go:119] fail to check proxy env: Error ip not in block
	W0816 10:25:00.356590    4656 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 10:25:00.356679    4656 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0816 10:25:00.356698    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHHostname
	I0816 10:25:00.356711    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:25:00.356905    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:25:00.356940    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHPort
	I0816 10:25:00.357121    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:25:00.357153    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHKeyPath
	I0816 10:25:00.357335    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	I0816 10:25:00.357342    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetSSHUsername
	I0816 10:25:00.357519    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000-m03/id_rsa Username:docker}
	W0816 10:25:00.391006    4656 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 10:25:00.391060    4656 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 10:25:00.439137    4656 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 10:25:00.439154    4656 start.go:495] detecting cgroup driver to use...
	I0816 10:25:00.439231    4656 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:25:00.454661    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 10:25:00.463185    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 10:25:00.471601    4656 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 10:25:00.471658    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 10:25:00.480421    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:25:00.488812    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 10:25:00.497664    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:25:00.506080    4656 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 10:25:00.514726    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 10:25:00.523293    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 10:25:00.531650    4656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 10:25:00.540020    4656 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 10:25:00.547503    4656 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 10:25:00.555089    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:25:00.643202    4656 ssh_runner.go:195] Run: sudo systemctl restart containerd
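
The string of sed commands above rewrites /etc/containerd/config.toml in place; the key edit for the cgroupfs driver is forcing SystemdCgroup = false. A Go sketch of that one rewrite, mirroring the logged sed expression:

// Sketch: the SystemdCgroup rewrite performed by the logged sed command.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := []byte("[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n" +
		"    SystemdCgroup = true\n")
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(conf, []byte("${1}SystemdCgroup = false"))
	fmt.Print(string(out))
}
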
	I0816 10:25:00.663102    4656 start.go:495] detecting cgroup driver to use...
	I0816 10:25:00.663170    4656 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 10:25:00.680492    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:25:00.693170    4656 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 10:25:00.707541    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:25:00.718044    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:25:00.728609    4656 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 10:25:00.747431    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:25:00.757669    4656 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:25:00.772722    4656 ssh_runner.go:195] Run: which cri-dockerd
	I0816 10:25:00.775964    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 10:25:00.783500    4656 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 10:25:00.797291    4656 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 10:25:00.889940    4656 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 10:25:00.996518    4656 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 10:25:00.996540    4656 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 10:25:01.010228    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:25:01.104164    4656 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:25:03.365849    4656 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.261743451s)
	I0816 10:25:03.365910    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0816 10:25:03.376096    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:25:03.386222    4656 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0816 10:25:03.479109    4656 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0816 10:25:03.594325    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:25:03.706928    4656 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0816 10:25:03.721224    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:25:03.732283    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:25:03.827894    4656 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
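
Before these restarts, the log scps a ~130-byte daemon.json that pins dockerd to the cgroupfs driver. The exact keys minikube writes aren't shown, so this sketch of a plausible payload is hedged; "exec-opts" with native.cgroupdriver is the standard dockerd setting:

// Sketch: render a minimal daemon.json selecting the cgroupfs driver.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"}, // assumption: minimal payload
	}
	b, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(b)) // would be written to /etc/docker/daemon.json
}
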
	I0816 10:25:03.888066    4656 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0816 10:25:03.888145    4656 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0816 10:25:03.893520    4656 start.go:563] Will wait 60s for crictl version
	I0816 10:25:03.893575    4656 ssh_runner.go:195] Run: which crictl
	I0816 10:25:03.896917    4656 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 10:25:03.925631    4656 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0816 10:25:03.925712    4656 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:25:03.944598    4656 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:25:03.985082    4656 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0816 10:25:04.029274    4656 out.go:177]   - env NO_PROXY=192.169.0.5
	I0816 10:25:04.051107    4656 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I0816 10:25:04.072084    4656 main.go:141] libmachine: (ha-286000-m03) Calling .GetIP
	I0816 10:25:04.072364    4656 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0816 10:25:04.075855    4656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:25:04.085745    4656 mustload.go:65] Loading cluster: ha-286000
	I0816 10:25:04.085928    4656 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:25:04.086156    4656 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:25:04.086178    4656 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:25:04.095096    4656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52236
	I0816 10:25:04.095437    4656 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:25:04.095780    4656 main.go:141] libmachine: Using API Version  1
	I0816 10:25:04.095794    4656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:25:04.095992    4656 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:25:04.096098    4656 main.go:141] libmachine: (ha-286000) Calling .GetState
	I0816 10:25:04.096178    4656 main.go:141] libmachine: (ha-286000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:25:04.096257    4656 main.go:141] libmachine: (ha-286000) DBG | hyperkit pid from json: 4669
	I0816 10:25:04.097216    4656 host.go:66] Checking if "ha-286000" exists ...
	I0816 10:25:04.097478    4656 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:25:04.097503    4656 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:25:04.106283    4656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52238
	I0816 10:25:04.106623    4656 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:25:04.106944    4656 main.go:141] libmachine: Using API Version  1
	I0816 10:25:04.106954    4656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:25:04.107151    4656 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:25:04.107299    4656 main.go:141] libmachine: (ha-286000) Calling .DriverName
	I0816 10:25:04.107413    4656 certs.go:68] Setting up /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000 for IP: 192.169.0.7
	I0816 10:25:04.107420    4656 certs.go:194] generating shared ca certs ...
	I0816 10:25:04.107432    4656 certs.go:226] acquiring lock for ca certs: {Name:mka549fbc467849834d2267da204028c49d88a3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:25:04.107603    4656 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key
	I0816 10:25:04.107673    4656 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key
	I0816 10:25:04.107682    4656 certs.go:256] generating profile certs ...
	I0816 10:25:04.107801    4656 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key
	I0816 10:25:04.107821    4656 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.fbb2b423
	I0816 10:25:04.107836    4656 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.fbb2b423 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I0816 10:25:04.288936    4656 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.fbb2b423 ...
	I0816 10:25:04.288952    4656 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.fbb2b423: {Name:mk5b5d381df2e0229dfa97b94f9501ac61e1f4af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:25:04.289301    4656 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.fbb2b423 ...
	I0816 10:25:04.289309    4656 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.fbb2b423: {Name:mk1c231c3478673ccffbd14f4f0c5e31373f1228 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:25:04.289510    4656 certs.go:381] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt.fbb2b423 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt
	I0816 10:25:04.289730    4656 certs.go:385] copying /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key.fbb2b423 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key
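
The SAN list logged at crypto.go:68 covers every address a client might use to reach the apiserver: the in-cluster service VIP 10.96.0.1, localhost, the three control-plane node IPs, and the kube-vip address 192.169.0.254. A stripped-down sketch of issuing such a cert with crypto/x509 follows (key size and validity are assumptions here; this is not minikube's certs.go):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour), // assumption: real validity differs
		IsCA:                  true,
		BasicConstraintsValid: true,
		KeyUsage:              x509.KeyUsageCertSign,
	}
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		// The SAN IPs from the log line above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.169.0.5"), net.ParseIP("192.169.0.6"),
			net.ParseIP("192.169.0.7"), net.ParseIP("192.169.0.254"),
		},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caTmpl, &srvKey.PublicKey, caKey)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("issued apiserver cert, %d DER bytes\n", len(der))
}
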
	I0816 10:25:04.289982    4656 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key
	I0816 10:25:04.289991    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 10:25:04.290020    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 10:25:04.290039    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 10:25:04.290058    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 10:25:04.290076    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 10:25:04.290101    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 10:25:04.290120    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 10:25:04.290144    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 10:25:04.290239    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem (1338 bytes)
	W0816 10:25:04.290288    4656 certs.go:480] ignoring /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831_empty.pem, impossibly tiny 0 bytes
	I0816 10:25:04.290297    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 10:25:04.290334    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/ca.pem (1078 bytes)
	I0816 10:25:04.290369    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/cert.pem (1123 bytes)
	I0816 10:25:04.290397    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/key.pem (1679 bytes)
	I0816 10:25:04.290469    4656 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem (1708 bytes)
	I0816 10:25:04.290504    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem -> /usr/share/ca-certificates/1831.pem
	I0816 10:25:04.290530    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem -> /usr/share/ca-certificates/18312.pem
	I0816 10:25:04.290551    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:25:04.290581    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHHostname
	I0816 10:25:04.290714    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHPort
	I0816 10:25:04.290801    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHKeyPath
	I0816 10:25:04.290889    4656 main.go:141] libmachine: (ha-286000) Calling .GetSSHUsername
	I0816 10:25:04.290979    4656 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa Username:docker}
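
A condensed look at what sshutil.go:53 sets up: an SSH client for the node, authenticated with the machine's id_rsa key as user docker. This is a hypothetical sketch using golang.org/x/crypto/ssh rather than minikube's sshutil package, and host-key checking is skipped for brevity:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPath := "/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/ha-286000/id_rsa"
	pem, err := os.ReadFile(keyPath)
	if err != nil {
		fmt.Println(err)
		return
	}
	signer, err := ssh.ParsePrivateKey(pem)
	if err != nil {
		fmt.Println(err)
		return
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // assumption: no known_hosts check
	}
	client, err := ssh.Dial("tcp", "192.169.0.5:22", cfg)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer client.Close()
	fmt.Println("ssh client ready")
}
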
	I0816 10:25:04.320175    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0816 10:25:04.323948    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0816 10:25:04.332572    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0816 10:25:04.335881    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0816 10:25:04.344208    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0816 10:25:04.347261    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0816 10:25:04.355353    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0816 10:25:04.358754    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0816 10:25:04.367226    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0816 10:25:04.370644    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0816 10:25:04.379014    4656 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0816 10:25:04.382464    4656 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0816 10:25:04.390940    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 10:25:04.411283    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 10:25:04.431206    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 10:25:04.451054    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 10:25:04.470415    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0816 10:25:04.490122    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 10:25:04.509717    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 10:25:04.529383    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 10:25:04.549154    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/certs/1831.pem --> /usr/share/ca-certificates/1831.pem (1338 bytes)
	I0816 10:25:04.568985    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/ssl/certs/18312.pem --> /usr/share/ca-certificates/18312.pem (1708 bytes)
	I0816 10:25:04.588519    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 10:25:04.607970    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0816 10:25:04.621401    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0816 10:25:04.635625    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0816 10:25:04.649570    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0816 10:25:04.663171    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0816 10:25:04.676495    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0816 10:25:04.690056    4656 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0816 10:25:04.703786    4656 ssh_runner.go:195] Run: openssl version
	I0816 10:25:04.707923    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1831.pem && ln -fs /usr/share/ca-certificates/1831.pem /etc/ssl/certs/1831.pem"
	I0816 10:25:04.716268    4656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1831.pem
	I0816 10:25:04.719659    4656 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 16:57 /usr/share/ca-certificates/1831.pem
	I0816 10:25:04.719702    4656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1831.pem
	I0816 10:25:04.723849    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1831.pem /etc/ssl/certs/51391683.0"
	I0816 10:25:04.732246    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18312.pem && ln -fs /usr/share/ca-certificates/18312.pem /etc/ssl/certs/18312.pem"
	I0816 10:25:04.740650    4656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18312.pem
	I0816 10:25:04.743948    4656 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 16:57 /usr/share/ca-certificates/18312.pem
	I0816 10:25:04.743983    4656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18312.pem
	I0816 10:25:04.748103    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18312.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 10:25:04.756745    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 10:25:04.765039    4656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:25:04.768354    4656 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:48 /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:25:04.768417    4656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:25:04.772556    4656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
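
The symlink names used above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-name hashes: `openssl x509 -hash -noout` prints the hash under which the TLS stack looks a CA up in /etc/ssl/certs. A small illustrative wrapper, shelling out to the same openssl invocation the log runs (hypothetical helper, not minikube's certs.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHash returns the hash OpenSSL uses to name CA symlinks, e.g. "b5213941".
func subjectHash(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Println(err)
		return
	}
	// Same shape as the logged `ln -fs ... /etc/ssl/certs/<hash>.0` command.
	fmt.Printf("ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", h)
}
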
	I0816 10:25:04.781063    4656 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 10:25:04.784249    4656 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 10:25:04.784287    4656 kubeadm.go:934] updating node {m03 192.169.0.7 8443 v1.31.0 docker true true} ...
	I0816 10:25:04.784343    4656 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-286000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-286000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
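
The unit snippet above uses the standard systemd override idiom: the bare `ExecStart=` line first clears the vendor-supplied command so that the fully-flagged kubelet invocation that follows replaces it rather than being appended to it. A simplified re-creation of how such a drop-in can be templated (hypothetical; not the actual kubeadm.go:946 template):

package main

import (
	"os"
	"text/template"
)

const kubeletDropIn = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletDropIn))
	// Values taken from the log above for node m03.
	_ = t.Execute(os.Stdout, map[string]string{
		"BinDir":   "/var/lib/minikube/binaries/v1.31.0",
		"NodeName": "ha-286000-m03",
		"NodeIP":   "192.169.0.7",
	})
}
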
	I0816 10:25:04.784359    4656 kube-vip.go:115] generating kube-vip config ...
	I0816 10:25:04.784396    4656 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 10:25:04.796986    4656 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 10:25:04.797028    4656 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
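
In this manifest kube-vip runs as a static pod and holds the VIP through a Kubernetes lease: with leaseduration 5, renewdeadline 3 and retryperiod 1 the timings satisfy the usual leaseDuration > renewDeadline > retryPeriod invariant, which bounds worst-case VIP failover at roughly leaseDuration + retryPeriod, about 6 seconds. A trivial check of that invariant (illustrative only, not kube-vip code):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values from the kube-vip env block above.
	lease, renew, retry := 5*time.Second, 3*time.Second, 1*time.Second
	if !(lease > renew && renew > retry) {
		fmt.Println("invalid leader-election timings")
		return
	}
	fmt.Printf("worst-case VIP failover ~ %s\n", lease+retry)
}
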
	I0816 10:25:04.797080    4656 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 10:25:04.805783    4656 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0816 10:25:04.805828    4656 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0816 10:25:04.815857    4656 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0816 10:25:04.815860    4656 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0816 10:25:04.815857    4656 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0816 10:25:04.815875    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0816 10:25:04.815878    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0816 10:25:04.815911    4656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:25:04.815963    4656 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0816 10:25:04.815967    4656 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0816 10:25:04.819783    4656 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0816 10:25:04.819808    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0816 10:25:04.819886    4656 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0816 10:25:04.819905    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0816 10:25:04.838560    4656 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0816 10:25:04.838690    4656 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0816 10:25:04.892677    4656 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0816 10:25:04.892722    4656 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
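
The `?checksum=file:` query strings logged at binary.go:74 follow the hashicorp/go-getter convention: fetch the artifact, fetch the published .sha256 file, and compare digests before installing. A standard-library-only approximation of that flow (hypothetical; minikube's real downloader differs):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

func fetchChecked(binURL, sumURL string) ([]byte, error) {
	resp, err := http.Get(binURL)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}
	sumResp, err := http.Get(sumURL)
	if err != nil {
		return nil, err
	}
	defer sumResp.Body.Close()
	sumBytes, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return nil, err
	}
	// Checksum files are either bare hex or "<hex>  <filename>".
	fields := strings.Fields(string(sumBytes))
	if len(fields) == 0 {
		return nil, fmt.Errorf("empty checksum file %s", sumURL)
	}
	got := sha256.Sum256(body)
	if hex.EncodeToString(got[:]) != fields[0] {
		return nil, fmt.Errorf("checksum mismatch: got %x, want %s", got, fields[0])
	}
	return body, nil
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl"
	if _, err := fetchChecked(base, base+".sha256"); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kubectl verified")
}
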
	I0816 10:25:05.452270    4656 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0816 10:25:05.460515    4656 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0816 10:25:05.473974    4656 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 10:25:05.487288    4656 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0816 10:25:05.501421    4656 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0816 10:25:05.504340    4656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:25:05.514511    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:25:05.610695    4656 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:25:05.627113    4656 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:25:05.627365    4656 config.go:182] Loaded profile config "ha-286000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:25:05.650018    4656 out.go:177] * Verifying Kubernetes components...
	I0816 10:25:05.671252    4656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:25:05.770878    4656 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:25:06.484588    4656 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:25:06.484787    4656 kapi.go:59] client config for ha-286000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/ha-286000/client.key", CAFile:"/Users/jenkins/minikube-integration/19461-1276/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x5540f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0816 10:25:06.484828    4656 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0816 10:25:06.484987    4656 node_ready.go:35] waiting up to 6m0s for node "ha-286000-m03" to be "Ready" ...
	I0816 10:25:06.485034    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:06.485039    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:06.485045    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:06.485048    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:06.487783    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:06.985311    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:06.985336    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:06.985348    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:06.985354    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:06.989349    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:07.485490    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:07.485513    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:07.485524    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:07.485529    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:07.489016    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:07.985178    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:07.985193    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:07.985199    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:07.985202    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:07.987679    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:08.487278    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:08.487300    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:08.487309    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:08.487315    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:08.491486    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:25:08.491567    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
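
The repeating GET/404 pairs above are the expected shape of this wait: node_ready.go polls /api/v1/nodes/ha-286000-m03 roughly every 500ms inside the 6-minute budget, and a 404 simply means kubeadm join has not yet registered the node. A bare-bones version of that loop (illustrative; the real code authenticates with the profile's client certs via client-go, which is elided here):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForNode(apiServer, node string, timeout time.Duration) error {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // assumption: cert setup skipped
	}}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(apiServer + "/api/v1/nodes/" + node)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // node object exists; readiness conditions are checked next
			}
			// 404: the node has not registered yet, keep polling.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q not registered within %s", node, timeout)
}

func main() {
	if err := waitForNode("https://192.169.0.5:8443", "ha-286000-m03", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
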
	I0816 10:25:08.987160    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:08.987184    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:08.987194    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:08.987200    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:08.990942    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:09.485053    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:09.485101    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:09.485109    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:09.485113    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:09.487562    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:09.985592    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:09.985671    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:09.985687    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:09.985696    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:09.989637    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:10.486025    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:10.486050    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:10.486061    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:10.486067    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:10.489557    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:10.985086    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:10.985127    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:10.985134    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:10.985139    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:10.987914    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:10.987975    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:11.485153    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:11.485176    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:11.485186    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:11.485193    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:11.488752    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:11.986139    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:11.986154    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:11.986162    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:11.986166    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:11.989386    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:12.485803    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:12.485849    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:12.485865    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:12.485870    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:12.489472    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:12.986570    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:12.986596    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:12.986607    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:12.986612    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:12.990236    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:12.990376    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:13.484914    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:13.484926    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:13.484932    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:13.484935    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:13.488977    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:25:13.986680    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:13.986696    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:13.986702    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:13.986705    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:13.989158    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:14.486321    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:14.486382    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:14.486402    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:14.486412    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:14.491203    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:25:14.985877    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:14.985901    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:14.985912    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:14.985949    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:14.989703    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:15.485277    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:15.485292    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:15.485299    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:15.485302    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:15.487768    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:15.487830    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:15.985642    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:15.985663    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:15.985675    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:15.985680    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:15.989433    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:16.484901    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:16.484927    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:16.484939    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:16.484944    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:16.488779    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:16.986034    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:16.986047    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:16.986054    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:16.986062    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:16.988709    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:17.486864    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:17.486887    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:17.486924    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:17.486931    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:17.490473    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:17.490551    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:17.985889    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:17.985909    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:17.985921    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:17.985925    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:17.989836    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:18.485398    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:18.485414    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:18.485421    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:18.485425    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:18.487889    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:18.985349    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:18.985378    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:18.985436    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:18.985442    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:18.988422    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:19.485081    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:19.485102    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:19.485113    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:19.485121    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:19.488852    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:19.985049    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:19.985062    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:19.985081    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:19.985085    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:19.987210    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:19.987270    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:20.484914    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:20.484939    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:20.484949    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:20.484954    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:20.488695    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:20.985203    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:20.985229    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:20.985239    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:20.985245    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:20.989283    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:25:21.484963    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:21.484979    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:21.484985    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:21.484989    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:21.487275    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:21.985755    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:21.985782    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:21.985793    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:21.985798    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:21.989914    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:25:21.989997    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:22.485717    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:22.485745    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:22.485824    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:22.485835    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:22.489667    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:22.985286    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:22.985301    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:22.985307    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:22.985318    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:22.987903    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:23.485546    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:23.485567    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:23.485578    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:23.485583    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:23.489380    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:23.985686    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:23.985757    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:23.985777    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:23.985792    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:23.989466    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:24.484557    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:24.484568    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:24.484575    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:24.484578    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:24.487089    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:24.487151    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:24.985579    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:24.985600    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:24.985609    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:24.985614    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:24.989536    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:25.485541    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:25.485564    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:25.485576    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:25.485583    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:25.489272    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:25.984513    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:25.984529    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:25.984536    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:25.984540    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:25.987095    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:26.486003    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:26.486022    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:26.486034    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:26.486043    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:26.489357    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:26.489445    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:26.985326    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:26.985345    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:26.985357    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:26.985363    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:26.988993    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:27.484603    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:27.484616    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:27.484621    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:27.484625    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:27.486943    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:27.984825    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:27.984844    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:27.984855    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:27.984861    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:27.988691    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:28.486230    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:28.486245    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:28.486253    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:28.486259    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:28.491735    4656 round_trippers.go:574] Response Status: 404 Not Found in 5 milliseconds
	I0816 10:25:28.491792    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:28.985268    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:28.985287    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:28.985315    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:28.985319    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:28.987718    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:29.485335    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:29.485355    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:29.485367    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:29.485372    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:29.488781    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:29.984712    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:29.984727    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:29.984736    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:29.984740    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:29.987128    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:30.484437    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:30.484448    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:30.484454    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:30.484457    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:30.487047    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:30.984627    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:30.984648    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:30.984659    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:30.984665    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:30.988084    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:30.988236    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:31.486364    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:31.486416    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:31.486431    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:31.486464    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:31.489760    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:31.985027    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:31.985041    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:31.985048    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:31.985052    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:31.987323    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:32.486368    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:32.486394    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:32.486407    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:32.486413    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:32.490571    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:25:32.984941    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:32.984966    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:32.984978    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:32.984984    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:32.988672    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:32.988757    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:25:33.484801    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:33.484813    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:33.484818    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:33.484823    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:33.487037    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:33.985797    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:33.985821    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:33.985834    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:33.985843    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:33.989368    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:34.484289    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:34.484304    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:34.484313    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:34.484318    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:34.486642    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:34.985159    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:34.985174    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:34.985181    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:34.985184    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:34.987765    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:25:35.484974    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:25:35.484995    4656 round_trippers.go:469] Request Headers:
	I0816 10:25:35.485006    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:25:35.485012    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:25:35.488175    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:25:35.488288    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	[... the same GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03 poll repeats every ~500ms from 10:25:35 through 10:26:34, each answered "404 Not Found" in 1-11 milliseconds; node_ready.go:53 logs error getting node "ha-286000-m03": nodes "ha-286000-m03" not found after every fifth attempt ...]
	I0816 10:26:35.483727    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:35.483740    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:35.483747    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:35.483751    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:35.485833    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:35.485894    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:35.982916    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:35.982943    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:35.982955    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:35.982965    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:35.986742    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:36.483103    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:36.483123    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:36.483132    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:36.483135    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:36.485868    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:36.982704    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:36.982762    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:36.982776    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:36.982790    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:36.986222    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:37.483468    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:37.483488    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:37.483500    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:37.483506    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:37.487244    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:37.487314    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:37.983372    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:37.983388    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:37.983394    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:37.983397    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:37.985922    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:38.483160    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:38.483179    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:38.483191    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:38.483199    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:38.486492    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:38.982468    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:38.982483    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:38.982489    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:38.982493    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:38.984866    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:39.482442    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:39.482495    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:39.482503    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:39.482507    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:39.484936    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:39.982412    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:39.982432    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:39.982441    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:39.982450    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:39.986230    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:39.986305    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:40.483055    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:40.483077    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:40.483087    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:40.483096    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:40.486444    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:40.983022    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:40.983056    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:40.983064    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:40.983068    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:40.985224    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:41.482184    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:41.482204    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:41.482215    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:41.482220    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:41.485468    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:41.983203    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:41.983227    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:41.983306    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:41.983320    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:41.987091    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:41.987171    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:42.483067    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:42.483083    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:42.483092    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:42.483096    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:42.485854    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:42.982325    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:42.982346    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:42.982358    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:42.982367    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:42.985247    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:43.482212    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:43.482232    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:43.482245    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:43.482253    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:43.485500    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:43.982210    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:43.982226    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:43.982232    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:43.982235    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:43.984789    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:44.483719    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:44.483739    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:44.483750    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:44.483758    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:44.487463    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:44.487539    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:44.984070    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:44.984094    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:44.984106    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:44.984112    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:44.987930    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:45.483159    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:45.483174    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:45.483183    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:45.483188    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:45.485689    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:45.982348    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:45.982376    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:45.982441    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:45.982451    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:45.986431    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:46.483035    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:46.483061    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:46.483073    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:46.483079    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:46.487152    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:26:46.982639    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:46.982696    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:46.982710    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:46.982717    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:46.986259    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:46.986315    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:47.482155    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:47.482188    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:47.482237    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:47.482249    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:47.485627    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:47.983982    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:47.984007    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:47.984020    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:47.984026    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:47.988122    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:26:48.482121    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:48.482168    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:48.482175    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:48.482179    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:48.484595    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:48.983532    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:48.983556    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:48.983569    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:48.983574    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:48.987409    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:48.987484    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:49.483718    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:49.483736    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:49.483748    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:49.483754    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:49.487115    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:49.982660    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:49.982682    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:49.982692    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:49.982696    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:49.985469    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:50.481995    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:50.482014    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:50.482032    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:50.482058    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:50.485582    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:50.981809    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:50.981828    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:50.981835    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:50.981839    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:50.984238    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:51.482206    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:51.482226    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:51.482236    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:51.482241    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:51.485102    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:51.485201    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:51.983488    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:51.983503    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:51.983512    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:51.983516    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:51.986249    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:52.482268    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:52.482293    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:52.482304    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:52.482311    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:52.485931    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:52.983543    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:52.983556    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:52.983562    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:52.983564    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:52.987568    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:53.482529    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:53.482553    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:53.482590    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:53.482612    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:53.486396    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:53.486481    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:53.983382    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:53.983409    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:53.983421    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:53.983426    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:53.987647    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:26:54.482288    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:54.482367    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:54.482378    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:54.482383    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:54.484925    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:54.983458    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:54.983478    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:54.983490    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:54.983497    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:54.987016    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:55.482017    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:55.482037    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:55.482048    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:55.482054    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:55.485201    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:55.983339    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:55.983353    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:55.983360    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:55.983377    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:55.985849    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:55.985910    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:56.483753    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:56.483779    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:56.483792    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:56.483798    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:56.487683    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:56.983682    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:56.983735    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:56.983749    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:56.983758    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:56.987724    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:57.481708    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:57.481724    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:57.481730    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:57.481733    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:57.483972    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:57.983723    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:57.983751    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:57.983772    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:57.983782    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:57.987662    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:57.987781    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:26:58.481946    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:58.481978    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:58.481989    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:58.481998    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:58.485616    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:58.982478    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:58.982494    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:58.982501    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:58.982503    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:58.984797    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:26:59.482635    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:59.482661    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:59.482672    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:59.482678    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:59.486199    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:26:59.983080    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:26:59.983108    4656 round_trippers.go:469] Request Headers:
	I0816 10:26:59.983179    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:26:59.983189    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:26:59.986765    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:00.481883    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:00.481904    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:00.481916    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:00.481923    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:00.485164    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:00.485241    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:00.983581    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:00.983606    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:00.983618    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:00.983626    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:00.987095    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:01.481499    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:01.481518    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:01.481530    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:01.481536    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:01.484541    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:01.981949    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:01.981971    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:01.981980    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:01.981985    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:01.984730    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:02.483014    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:02.483039    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:02.483050    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:02.483057    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:02.486856    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:02.486952    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:02.982039    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:02.982061    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:02.982075    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:02.982083    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:02.986009    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:03.482044    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:03.482058    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:03.482064    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:03.482068    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:03.484293    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:03.982493    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:03.982521    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:03.982589    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:03.982599    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:03.986547    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:04.481423    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:04.481443    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:04.481481    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:04.481492    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:04.484534    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:04.981631    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:04.981650    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:04.981659    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:04.981665    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:04.984478    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:04.984535    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:05.481850    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:05.481876    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:05.481888    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:05.481895    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:05.485885    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:05.983485    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:05.983508    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:05.983520    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:05.983529    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:05.987747    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:27:06.481638    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:06.481654    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:06.481660    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:06.481666    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:06.483910    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:06.982417    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:06.982443    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:06.982456    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:06.982461    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:06.986711    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:27:06.986836    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:07.482901    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:07.482925    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:07.482937    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:07.482944    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:07.486790    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:07.981354    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:07.981370    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:07.981376    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:07.981380    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:07.984233    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:08.482884    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:08.482907    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:08.482918    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:08.482923    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:08.486675    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:08.983285    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:08.983308    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:08.983320    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:08.983362    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:08.987075    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:08.987178    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:09.481582    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:09.481596    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:09.481602    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:09.481615    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:09.484345    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:09.982946    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:09.982968    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:09.982980    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:09.982987    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:09.987241    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:27:10.482214    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:10.482233    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:10.482245    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:10.482250    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:10.485342    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:10.981598    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:10.981613    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:10.981647    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:10.981651    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:10.983798    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:11.481915    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:11.481938    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:11.481949    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:11.481956    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:11.485887    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:11.485960    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:11.982040    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:11.982065    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:11.982077    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:11.982085    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:11.985843    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:12.481119    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:12.481134    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:12.481140    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:12.481144    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:12.483753    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:12.983314    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:12.983335    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:12.983348    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:12.983354    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:12.987658    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:27:13.483200    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:13.483225    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:13.483237    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:13.483242    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:13.487000    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:13.487075    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:13.981082    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:13.981098    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:13.981104    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:13.981107    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:13.983666    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:14.481510    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:14.481533    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:14.481546    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:14.481553    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:14.485493    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:14.982587    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:14.982611    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:14.982623    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:14.982632    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:14.986953    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:27:15.481989    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:15.482002    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:15.482008    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:15.482011    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:15.484306    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:15.983142    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:15.983197    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:15.983212    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:15.983220    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:15.987145    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:15.987217    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:16.482640    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:16.482663    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:16.482676    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:16.482682    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:16.486588    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:16.982739    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:16.982758    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:16.982767    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:16.982771    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:16.985870    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:17.482222    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:17.482247    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:17.482259    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:17.482264    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:17.486553    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:27:17.982295    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:17.982319    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:17.982345    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:17.982355    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:17.986295    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:18.481466    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:18.481480    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:18.481501    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:18.481505    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:18.484182    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:18.484250    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:18.981829    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:18.981869    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:18.981879    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:18.981887    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:18.984310    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:19.481304    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:19.481354    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:19.481368    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:19.481374    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:19.485047    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:19.981003    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:19.981016    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:19.981022    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:19.981026    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:19.983258    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:20.482082    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:20.482099    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:20.482107    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:20.482110    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:20.484774    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:20.484831    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:20.982149    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:20.982161    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:20.982167    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:20.982171    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:20.984491    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:21.482759    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:21.482774    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:21.482784    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:21.482805    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:21.488307    4656 round_trippers.go:574] Response Status: 404 Not Found in 5 milliseconds
	I0816 10:27:21.980923    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:21.980944    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:21.980956    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:21.980962    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:21.985236    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:27:22.480954    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:22.480982    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:22.481000    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:22.481007    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:22.484623    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:22.982155    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:22.982170    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:22.982177    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:22.982183    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:22.985131    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:22.985233    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:23.481447    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:23.481473    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:23.481485    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:23.481493    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:23.485171    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:23.980807    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:23.980841    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:23.980854    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:23.980886    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:23.984726    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:24.481009    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:24.481023    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:24.481030    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:24.481033    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:24.483629    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:24.981780    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:24.981800    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:24.981812    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:24.981817    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:24.985032    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:25.482336    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:25.482370    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:25.482430    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:25.482437    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:25.486196    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:25.486271    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:25.981022    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:25.981035    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:25.981041    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:25.981048    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:25.983833    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:26.481578    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:26.481603    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:26.481614    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:26.481620    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:26.485938    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:27:26.981068    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:26.981108    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:26.981117    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:26.981122    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:26.983762    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:27.481705    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:27.481739    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:27.481747    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:27.481751    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:27.484193    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:27.981754    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:27.981779    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:27.981791    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:27.981804    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:27.985583    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:27.985651    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:28.481144    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:28.481173    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:28.481209    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:28.481216    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:28.484725    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:28.981756    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:28.981769    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:28.981776    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:28.981779    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:28.984303    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:29.481471    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:29.481547    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:29.481562    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:29.481571    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:29.484980    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:29.981350    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:29.981376    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:29.981388    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:29.981394    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:29.985134    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:30.481784    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:30.481800    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:30.481807    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:30.481810    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:30.483987    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:30.484040    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:30.981042    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:30.981064    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:30.981075    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:30.981082    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:30.985035    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:31.480553    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:31.480568    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:31.480576    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:31.480580    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:31.483746    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:31.981346    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:31.981362    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:31.981368    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:31.981372    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:31.983579    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:32.481011    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:32.481036    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:32.481048    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:32.481054    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:32.484005    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:32.484066    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:32.980838    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:32.980858    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:32.980869    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:32.980876    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:32.984769    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:33.481797    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:33.481813    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:33.481819    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:33.481822    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:33.484075    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:33.980538    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:33.980569    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:33.980581    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:33.980586    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:33.984292    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:34.480611    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:34.480633    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:34.480644    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:34.480650    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:34.484424    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:34.484495    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:34.980662    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:34.980675    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:34.980685    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:34.980688    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:34.983333    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:35.481072    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:35.481093    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:35.481104    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:35.481109    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:35.484858    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:35.980573    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:35.980600    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:35.980613    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:35.980619    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:35.984318    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:36.481723    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:36.481742    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:36.481750    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:36.481755    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:36.484525    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:36.484582    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:36.981468    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:36.981491    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:36.981534    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:36.981541    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:36.985480    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:37.481087    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:37.481115    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:37.481127    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:37.481133    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:37.484349    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:37.981606    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:37.981618    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:37.981624    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:37.981628    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:37.984174    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:38.480919    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:38.480942    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:38.480954    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:38.480960    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:38.484462    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:38.484530    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:38.980883    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:38.980958    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:38.980971    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:38.980976    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:38.985426    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:27:39.480691    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:39.480705    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:39.480711    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:39.480714    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:39.483370    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:39.980523    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:39.980543    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:39.980554    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:39.980559    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:39.983705    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:40.480857    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:40.480870    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:40.480876    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:40.480880    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:40.483015    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:40.980527    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:40.980547    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:40.980559    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:40.980566    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:40.984425    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:40.984557    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:41.480215    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:41.480250    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:41.480259    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:41.480264    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:41.482681    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:41.980221    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:41.980233    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:41.980238    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:41.980241    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:41.983101    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:42.481763    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:42.481782    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:42.481794    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:42.481801    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:42.484939    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:42.981092    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:42.981114    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:42.981125    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:42.981131    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:42.985191    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:27:42.985282    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:43.481456    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:43.481481    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:43.481493    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:43.481498    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:43.485020    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:43.981686    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:43.981734    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:43.981742    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:43.981745    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:43.984138    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:44.480895    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:44.480921    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:44.480934    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:44.480940    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:44.484864    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:44.980350    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:44.980376    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:44.980388    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:44.980394    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:44.984559    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:27:45.480493    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:45.480509    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:45.480518    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:45.480523    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:45.483088    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:45.483193    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:45.981740    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:45.981766    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:45.981778    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:45.981787    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:45.985812    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:27:46.480744    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:46.480771    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:46.480782    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:46.480788    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:46.484433    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:46.980028    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:46.980044    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:46.980052    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:46.980058    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:46.982468    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:47.480811    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:47.480834    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:47.480846    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:47.480854    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:47.484154    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:47.484225    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:47.981495    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:47.981558    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:47.981573    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:47.981579    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:47.984852    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:48.481331    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:48.481350    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:48.481357    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:48.481360    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:48.483672    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:48.981308    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:48.981334    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:48.981345    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:48.981351    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:48.987316    4656 round_trippers.go:574] Response Status: 404 Not Found in 5 milliseconds
	I0816 10:27:49.480610    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:49.480631    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:49.480642    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:49.480650    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:49.484493    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:49.484576    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:49.980270    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:49.980291    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:49.980303    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:49.980311    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:49.983514    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:50.480630    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:50.480652    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:50.480663    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:50.480672    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:50.484716    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:27:50.980998    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:50.981031    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:50.981079    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:50.981089    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:50.984717    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:51.481764    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:51.481781    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:51.481788    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:51.481792    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:51.483882    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:51.981147    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:51.981167    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:51.981178    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:51.981185    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:51.984837    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:51.984916    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:52.480088    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:52.480109    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:52.480121    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:52.480126    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:52.483987    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:52.980987    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:52.981013    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:52.981029    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:52.981059    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:52.984581    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:53.480043    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:53.480063    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:53.480084    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:53.480092    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:53.483664    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:53.980634    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:53.980693    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:53.980706    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:53.980711    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:53.984482    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:54.480029    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:54.480042    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:54.480051    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:54.480056    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:54.482803    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:54.482872    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:54.980002    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:54.980026    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:54.980038    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:54.980043    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:54.983690    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:55.480147    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:55.480213    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:55.480241    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:55.480251    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:55.484002    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:55.980804    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:55.980819    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:55.980825    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:55.980828    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:55.982902    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:56.480975    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:56.480997    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:56.481006    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:56.481011    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:56.484989    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:56.485061    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:56.980849    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:56.980870    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:56.980880    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:56.980888    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:56.984648    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:57.479708    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:57.479723    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:57.479732    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:57.479736    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:57.482298    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:27:57.979711    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:57.979729    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:57.979741    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:57.979746    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:57.983031    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:58.481734    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:58.481790    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:58.481805    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:58.481814    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:58.486010    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:27:58.486113    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:27:58.980860    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:58.980917    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:58.980929    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:58.980937    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:58.984281    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:59.480008    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:59.480075    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:59.480090    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:59.480100    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:59.483377    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:27:59.981599    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:27:59.981621    4656 round_trippers.go:469] Request Headers:
	I0816 10:27:59.981633    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:27:59.981639    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:27:59.985606    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:00.480770    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:00.480786    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:00.480795    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:00.480798    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:00.483310    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:00.980781    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:00.980807    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:00.980817    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:00.980824    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:00.984773    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:00.984872    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:01.480191    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:01.480210    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:01.480218    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:01.480222    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:01.482706    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:01.979918    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:01.979940    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:01.979950    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:01.979955    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:01.982361    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:02.481286    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:02.481302    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:02.481308    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:02.481311    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:02.483655    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:02.980572    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:02.980632    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:02.980646    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:02.980655    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:02.984337    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:03.479541    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:03.479553    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:03.479560    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:03.479562    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:03.482043    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:03.482109    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:03.980816    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:03.980840    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:03.980877    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:03.980906    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:03.984861    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:04.481240    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:04.481266    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:04.481276    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:04.481282    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:04.485558    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:04.981353    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:04.981413    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:04.981429    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:04.981438    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:04.984812    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:05.480489    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:05.480511    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:05.480523    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:05.480528    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:05.484058    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:05.484144    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:05.979456    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:05.979471    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:05.979480    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:05.979485    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:05.981941    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:06.480803    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:06.480823    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:06.480834    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:06.480841    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:06.483869    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:06.980347    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:06.980368    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:06.980379    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:06.980384    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:06.983544    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:07.479393    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:07.479421    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:07.479481    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:07.479491    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:07.483249    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:07.979964    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:07.979979    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:07.979985    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:07.979988    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:07.983187    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:07.983251    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:08.479456    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:08.479474    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:08.479486    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:08.479493    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:08.483132    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:08.980053    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:08.980073    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:08.980083    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:08.980090    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:08.983933    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:09.481215    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:09.481229    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:09.481237    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:09.481242    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:09.483856    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:09.980082    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:09.980109    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:09.980121    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:09.980129    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:09.983657    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:09.983727    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:10.481137    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:10.481162    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:10.481171    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:10.481178    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:10.485023    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:10.979382    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:10.979406    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:10.979418    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:10.979425    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:10.982616    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:11.480878    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:11.480900    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:11.480924    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:11.480931    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:11.484400    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:11.980148    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:11.980201    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:11.980213    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:11.980220    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:11.983261    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:12.479546    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:12.479558    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:12.479564    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:12.479568    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:12.482006    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:12.482066    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:12.980407    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:12.980433    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:12.980446    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:12.980455    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:12.984259    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:13.481285    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:13.481304    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:13.481316    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:13.481321    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:13.484864    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:13.980948    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:13.980967    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:13.981024    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:13.981032    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:13.983792    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:14.480529    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:14.480592    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:14.480607    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:14.480615    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:14.485369    4656 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0816 10:28:14.485425    4656 node_ready.go:53] error getting node "ha-286000-m03": nodes "ha-286000-m03" not found
	I0816 10:28:14.980508    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:14.980528    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:14.980540    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:14.980546    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:14.984308    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:15.479351    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:15.479366    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:15.479375    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:15.479378    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:15.482333    4656 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 10:28:15.979273    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:15.979296    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:15.979308    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:15.979317    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:15.983036    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:28:16.480267    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:28:16.480288    4656 round_trippers.go:469] Request Headers:
	I0816 10:28:16.480300    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:28:16.480306    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:28:16.484104    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	[... the same GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03 request/response cycle repeats on a ~500ms interval from 10:28:16 through 10:29:06, every attempt returning 404 Not Found in 1-5 milliseconds; node_ready.go:53 logs `error getting node "ha-286000-m03": nodes "ha-286000-m03" not found` every few cycles throughout ...]
	I0816 10:29:06.478226    4656 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-286000-m03
	I0816 10:29:06.478251    4656 round_trippers.go:469] Request Headers:
	I0816 10:29:06.478264    4656 round_trippers.go:473]     Accept: application/json, */*
	I0816 10:29:06.478270    4656 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 10:29:06.482103    4656 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0816 10:29:06.482169    4656 node_ready.go:38] duration metric: took 4m0.00480463s for node "ha-286000-m03" to be "Ready" ...
	I0816 10:29:06.503388    4656 out.go:201] 
	W0816 10:29:06.524396    4656 out.go:270] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0816 10:29:06.524419    4656 out.go:270] * 
	W0816 10:29:06.525619    4656 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:29:06.587617    4656 out.go:201] 
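
The four minutes of polling above show the standard client-go readiness-wait pattern: fetch the Node object on a fixed interval, treat a 404 NotFound as "not registered yet", and give up once the surrounding deadline expires. The Go sketch below is an illustrative reconstruction of that pattern only, not minikube's actual node_ready implementation; the function name waitNodeReady and the kubeconfig loading are invented for the example.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the API server every 500ms (the cadence visible in the
// log above) until the named node exists and reports Ready, or until timeout.
func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return false, nil // node object not registered yet: the 404s above
		}
		if err != nil {
			return false, err // any other API error aborts the wait
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil // node exists but is not Ready yet
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(cs, "ha-286000-m03", 4*time.Minute); err != nil {
		fmt.Println("node never became Ready:", err)
	}
}

On expiry, wait.PollImmediate surfaces a timed-out error, which is the same failure shape as the "waitNodeCondition: context deadline exceeded" reported above.
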
	
	
	==> Docker <==
	Aug 16 17:24:28 ha-286000 cri-dockerd[1436]: time="2024-08-16T17:24:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/137dbec658acee61ce1910017edb0f5b3a85b75c5e3049e8bd90f1dbefcdb1c7/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 16 17:24:29 ha-286000 dockerd[1187]: time="2024-08-16T17:24:28.998809824Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:24:29 ha-286000 dockerd[1187]: time="2024-08-16T17:24:28.998948255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:24:29 ha-286000 dockerd[1187]: time="2024-08-16T17:24:28.998962428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:24:29 ha-286000 dockerd[1187]: time="2024-08-16T17:24:28.999102266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:24:29 ha-286000 dockerd[1187]: time="2024-08-16T17:24:29.047276534Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:24:29 ha-286000 dockerd[1187]: time="2024-08-16T17:24:29.047427124Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:24:29 ha-286000 dockerd[1187]: time="2024-08-16T17:24:29.047450862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:24:29 ha-286000 dockerd[1187]: time="2024-08-16T17:24:29.047581008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:24:29 ha-286000 dockerd[1187]: time="2024-08-16T17:24:29.126544781Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:24:29 ha-286000 dockerd[1187]: time="2024-08-16T17:24:29.126662219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:24:29 ha-286000 dockerd[1187]: time="2024-08-16T17:24:29.126672757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:24:29 ha-286000 dockerd[1187]: time="2024-08-16T17:24:29.126811937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:24:40 ha-286000 dockerd[1187]: time="2024-08-16T17:24:40.084727507Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:24:40 ha-286000 dockerd[1187]: time="2024-08-16T17:24:40.084839498Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:24:40 ha-286000 dockerd[1187]: time="2024-08-16T17:24:40.084854114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:24:40 ha-286000 dockerd[1187]: time="2024-08-16T17:24:40.085367785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:24:59 ha-286000 dockerd[1181]: time="2024-08-16T17:24:59.347142049Z" level=info msg="ignoring event" container=0c18c93270e7a68d0392d4acb324154ee21a1f857073e107621429751d23f788 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 16 17:24:59 ha-286000 dockerd[1187]: time="2024-08-16T17:24:59.347787162Z" level=info msg="shim disconnected" id=0c18c93270e7a68d0392d4acb324154ee21a1f857073e107621429751d23f788 namespace=moby
	Aug 16 17:24:59 ha-286000 dockerd[1187]: time="2024-08-16T17:24:59.347864246Z" level=warning msg="cleaning up after shim disconnected" id=0c18c93270e7a68d0392d4acb324154ee21a1f857073e107621429751d23f788 namespace=moby
	Aug 16 17:24:59 ha-286000 dockerd[1187]: time="2024-08-16T17:24:59.347873243Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 16 17:25:12 ha-286000 dockerd[1187]: time="2024-08-16T17:25:12.082815222Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 17:25:12 ha-286000 dockerd[1187]: time="2024-08-16T17:25:12.082919934Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 17:25:12 ha-286000 dockerd[1187]: time="2024-08-16T17:25:12.082946545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:25:12 ha-286000 dockerd[1187]: time="2024-08-16T17:25:12.083100138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8803f7012c881       6e38f40d628db                                                                                         6 minutes ago       Running             storage-provisioner       4                   fca40ed5fc112       storage-provisioner
	88937b4d9b3fc       045733566833c                                                                                         7 minutes ago       Running             kube-controller-manager   2                   b20f8615dee49       kube-controller-manager-ha-286000
	fdeb6586df346       8c811b4aec35f                                                                                         7 minutes ago       Running             busybox                   1                   137dbec658ace       busybox-7dff88458-dvmvk
	f9023c4cc7d09       12968670680f4                                                                                         7 minutes ago       Running             kindnet-cni               1                   b874afa97d609       kindnet-whqxb
	3cf3b8e6c2561       cbb01a7bd410d                                                                                         7 minutes ago       Running             coredns                   1                   0b81f15659889       coredns-6f6b679f8f-2kqjf
	0c18c93270e7a       6e38f40d628db                                                                                         7 minutes ago       Exited              storage-provisioner       3                   fca40ed5fc112       storage-provisioner
	5cf894bf46807       cbb01a7bd410d                                                                                         7 minutes ago       Running             coredns                   1                   26513e2b92d66       coredns-6f6b679f8f-rfbz7
	60feb425249e9       ad83b2ca7b09e                                                                                         7 minutes ago       Running             kube-proxy                1                   8008f00487db3       kube-proxy-w4nt2
	2d90cfc5f1d77       38af8ddebf499                                                                                         7 minutes ago       Running             kube-vip                  0                   bda0d9ff673b9       kube-vip-ha-286000
	77cac41fb9bde       2e96e5913fc06                                                                                         7 minutes ago       Running             etcd                      1                   5ee84d4289ece       etcd-ha-286000
	bcd696090d544       1766f54c897f0                                                                                         7 minutes ago       Running             kube-scheduler            1                   97f04e9e38892       kube-scheduler-ha-286000
	64b3c5f995d8d       604f5db92eaa8                                                                                         7 minutes ago       Running             kube-apiserver            4                   8d4b6b4a23609       kube-apiserver-ha-286000
	257f5b412fe2a       045733566833c                                                                                         7 minutes ago       Exited              kube-controller-manager   1                   b20f8615dee49       kube-controller-manager-ha-286000
	63b366c951f2a       604f5db92eaa8                                                                                         9 minutes ago       Exited              kube-apiserver            3                   818ee6dafe6c9       kube-apiserver-ha-286000
	5bbe7a8cc492e       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   26 minutes ago      Exited              busybox                   0                   1873ade92edb9       busybox-7dff88458-dvmvk
	bcd7170b050a5       cbb01a7bd410d                                                                                         28 minutes ago      Exited              coredns                   0                   452942e267927       coredns-6f6b679f8f-rfbz7
	60d3d03e297ce       cbb01a7bd410d                                                                                         28 minutes ago      Exited              coredns                   0                   fbd84fb813c90       coredns-6f6b679f8f-2kqjf
	2677394dcd66f       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              29 minutes ago      Exited              kindnet-cni               0                   afaf657e85c32       kindnet-whqxb
	81f6c96d46494       ad83b2ca7b09e                                                                                         29 minutes ago      Exited              kube-proxy                0                   dfb822fc27b5e       kube-proxy-w4nt2
	f7b2e9efdd94f       1766f54c897f0                                                                                         29 minutes ago      Exited              kube-scheduler            0                   8e2b186980aff       kube-scheduler-ha-286000
	d8dadff6cec78       2e96e5913fc06                                                                                         29 minutes ago      Exited              etcd                      0                   cdb14ff7d8896       etcd-ha-286000
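
Note: ATTEMPT is the restart count of that container within its pod sandbox (POD ID), so kube-apiserver is on its 4th attempt and storage-provisioner's attempt 3 sits Exited while attempt 4 runs. The columns match crictl-style output; a hedged way to regenerate the listing on this profile's node, assuming the VM from this run is still up:

	out/minikube-darwin-amd64 -p ha-286000 ssh -- sudo crictl ps -a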
	
	
	==> coredns [3cf3b8e6c256] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:58071 - 29432 "HINFO IN 269282700017442046.6298598734389881778. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.104629212s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1710767206]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 17:24:29.307) (total time: 30004ms):
	Trace[1710767206]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (17:24:59.311)
	Trace[1710767206]: [30.004743477s] [30.004743477s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1321835322]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 17:24:29.307) (total time: 30005ms):
	Trace[1321835322]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (17:24:59.312)
	Trace[1321835322]: [30.005483265s] [30.005483265s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[816453993]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 17:24:29.312) (total time: 30003ms):
	Trace[816453993]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (17:24:59.315)
	Trace[816453993]: [30.003551219s] [30.003551219s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [5cf894bf4680] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51498 - 60294 "HINFO IN 6373854949728581283.8966112489703867485. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.072467092s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[817614149]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 17:24:29.307) (total time: 30005ms):
	Trace[817614149]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (17:24:59.311)
	Trace[817614149]: [30.005208149s] [30.005208149s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1980986726]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 17:24:29.307) (total time: 30005ms):
	Trace[1980986726]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (17:24:59.311)
	Trace[1980986726]: [30.005923834s] [30.005923834s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1722306438]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 17:24:29.312) (total time: 30003ms):
	Trace[1722306438]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (17:24:59.315)
	Trace[1722306438]: [30.003847815s] [30.003847815s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
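
Note: both replacement CoreDNS pods show the same pattern: they come up against an unsynced API and then every reflector list to the in-cluster service VIP 10.96.0.1:443 times out after ~30s. That points at the Service VIP not being reachable yet (kube-proxy was itself restarting around this time, per the node events below) rather than at CoreDNS. A hedged probe from the node, assuming curl ships in the minikube ISO:

	# does the kubernetes Service VIP answer at all? (-k because the VIP cert is not the point here)
	out/minikube-darwin-amd64 -p ha-286000 ssh -- curl -sk --max-time 5 https://10.96.0.1/version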
	
	
	==> coredns [60d3d03e297c] <==
	[INFO] plugin/kubernetes: Trace[1595166943]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 17:19:50.818) (total time: 11830ms):
	Trace[1595166943]: ---"Objects listed" error:Unauthorized 11830ms (17:20:02.649)
	Trace[1595166943]: [11.830466351s] [11.830466351s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2678": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2678": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[852140040]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 17:20:06.131) (total time: 10521ms):
	Trace[852140040]: ---"Objects listed" error:Unauthorized 10521ms (17:20:16.652)
	Trace[852140040]: [10.521589006s] [10.521589006s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [bcd7170b050a] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[1786059905]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 17:20:05.425) (total time: 11223ms):
	Trace[1786059905]: ---"Objects listed" error:Unauthorized 11223ms (17:20:16.649)
	Trace[1786059905]: [11.223878813s] [11.223878813s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: Unexpected error when reading response body: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: unexpected error when reading response body. Please retry. Original error: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: Trace[1902597424]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 17:20:18.397) (total time: 12364ms):
	Trace[1902597424]: ---"Objects listed" error:unexpected error when reading response body. Please retry. Original error: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug="" 12364ms (17:20:30.761)
	Trace[1902597424]: [12.364669513s] [12.364669513s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: unexpected error when reading response body. Please retry. Original error: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: Unexpected error when reading response body: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: unexpected error when reading response body. Please retry. Original error: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: unexpected error when reading response body. Please retry. Original error: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
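
Note: the two Exited CoreDNS pods fail differently from the running ones: "Unauthorized" and the RBAC "services is forbidden ... system:serviceaccount:kube-system:coredns" messages are consistent with their credentials being rejected while the control plane restarted, the http2 GOAWAY lines are the apiserver draining connections as it shut down, and the SIGTERM/lameduck lines are just these pods being replaced. A quick check of the same service account's RBAC, assuming the kubeconfig context is named after the profile:

	kubectl --context ha-286000 auth can-i list services --as=system:serviceaccount:kube-system:coredns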
	
	
	==> describe nodes <==
	Name:               ha-286000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-286000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=ha-286000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T10_02_26_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:02:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-286000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:31:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 17:29:26 +0000   Fri, 16 Aug 2024 17:22:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 17:29:26 +0000   Fri, 16 Aug 2024 17:22:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 17:29:26 +0000   Fri, 16 Aug 2024 17:22:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 17:29:26 +0000   Fri, 16 Aug 2024 17:22:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-286000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 2010adee17654cf9b80256054061ea5a
	  System UUID:                ad96408c-0000-0000-89eb-d74e5b68d297
	  Boot ID:                    bef9467e-8834-4316-92a2-f595c590a856
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-dvmvk              0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 coredns-6f6b679f8f-2kqjf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 coredns-6f6b679f8f-rfbz7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-ha-286000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kindnet-whqxb                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      29m
	  kube-system                 kube-apiserver-ha-286000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-ha-286000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-w4nt2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-ha-286000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-vip-ha-286000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m22s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 7m20s                kube-proxy       
	  Normal  Starting                 29m                  kube-proxy       
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)    kubelet          Node ha-286000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  29m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)    kubelet          Node ha-286000 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)    kubelet          Node ha-286000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 29m                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  29m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 29m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           29m                  node-controller  Node ha-286000 event: Registered Node ha-286000 in Controller
	  Normal  RegisteredNode           28m                  node-controller  Node ha-286000 event: Registered Node ha-286000 in Controller
	  Normal  RegisteredNode           11m                  node-controller  Node ha-286000 event: Registered Node ha-286000 in Controller
	  Normal  NodeNotReady             9m40s (x2 over 11m)  node-controller  Node ha-286000 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  9m19s (x3 over 29m)  kubelet          Node ha-286000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m19s (x3 over 29m)  kubelet          Node ha-286000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m19s (x3 over 29m)  kubelet          Node ha-286000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                9m19s (x3 over 29m)  kubelet          Node ha-286000 status is now: NodeReady
	  Normal  Starting                 8m1s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m (x8 over 8m)      kubelet          Node ha-286000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m (x8 over 8m)      kubelet          Node ha-286000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m (x7 over 8m)      kubelet          Node ha-286000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m28s                node-controller  Node ha-286000 event: Registered Node ha-286000 in Controller
	  Normal  RegisteredNode           7m8s                 node-controller  Node ha-286000 event: Registered Node ha-286000 in Controller
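
Sanity check: the Allocated resources block is just the column sums of the pod table above. CPU requests: 250m (apiserver) + 200m (controller-manager) + 100m x 5 (etcd, scheduler, kindnet, two coredns) = 950m, i.e. 950/2000 ≈ 47% of the node's 2 CPUs; memory requests: 100Mi (etcd) + 50Mi (kindnet) + 2 x 70Mi (coredns) = 290Mi ≈ 13% of the 2164336Ki allocatable.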
	
	
	Name:               ha-286000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-286000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=ha-286000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T10_03_22_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:03:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-286000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:31:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 17:29:28 +0000   Fri, 16 Aug 2024 17:22:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 17:29:28 +0000   Fri, 16 Aug 2024 17:22:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 17:29:28 +0000   Fri, 16 Aug 2024 17:22:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 17:29:28 +0000   Fri, 16 Aug 2024 17:22:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-286000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 ee275d4bd6234ce08a6c7d60b8d19b43
	  System UUID:                f7634935-0000-0000-a751-ac72999da031
	  Boot ID:                    035257b9-18e7-4adc-8e61-b35126468d96
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-k9m92                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 etcd-ha-286000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kindnet-t9kjf                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      28m
	  kube-system                 kube-apiserver-ha-286000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-ha-286000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-pt669                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-ha-286000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-vip-ha-286000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m14s                  kube-proxy       
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 28m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  28m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)      kubelet          Node ha-286000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)      kubelet          Node ha-286000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)      kubelet          Node ha-286000-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28m                    node-controller  Node ha-286000-m02 event: Registered Node ha-286000-m02 in Controller
	  Normal  RegisteredNode           28m                    node-controller  Node ha-286000-m02 event: Registered Node ha-286000-m02 in Controller
	  Normal  NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 11m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node ha-286000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node ha-286000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)      kubelet          Node ha-286000-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           11m                    node-controller  Node ha-286000-m02 event: Registered Node ha-286000-m02 in Controller
	  Normal  NodeNotReady             9m45s                  node-controller  Node ha-286000-m02 status is now: NodeNotReady
	  Normal  Starting                 7m41s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m41s (x8 over 7m41s)  kubelet          Node ha-286000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m41s (x8 over 7m41s)  kubelet          Node ha-286000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m41s (x7 over 7m41s)  kubelet          Node ha-286000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m28s                  node-controller  Node ha-286000-m02 event: Registered Node ha-286000-m02 in Controller
	  Normal  RegisteredNode           7m8s                   node-controller  Node ha-286000-m02 event: Registered Node ha-286000-m02 in Controller
	
	
	Name:               ha-286000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-286000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=ha-286000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T10_17_22_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:17:21 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-286000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:22:50 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 16 Aug 2024 17:22:27 +0000   Fri, 16 Aug 2024 17:25:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 16 Aug 2024 17:22:27 +0000   Fri, 16 Aug 2024 17:25:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 16 Aug 2024 17:22:27 +0000   Fri, 16 Aug 2024 17:25:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 16 Aug 2024 17:22:27 +0000   Fri, 16 Aug 2024 17:25:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-286000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 324c7ca05f77443abc4e861a3d5a5224
	  System UUID:                9a6645c6-0000-0000-8cbd-49b6a6a0383b
	  Boot ID:                    839ab079-775d-4939-ac8e-9fb255ba29df
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-99xmp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kindnet-b9r6s              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-proxy-5qhgk           0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 14m                  kube-proxy       
	  Normal  NodeAllocatableEnforced  14m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14m                  node-controller  Node ha-286000-m04 event: Registered Node ha-286000-m04 in Controller
	  Normal  RegisteredNode           14m                  node-controller  Node ha-286000-m04 event: Registered Node ha-286000-m04 in Controller
	  Normal  RegisteredNode           11m                  node-controller  Node ha-286000-m04 event: Registered Node ha-286000-m04 in Controller
	  Normal  NodeNotReady             9m45s (x2 over 11m)  node-controller  Node ha-286000-m04 status is now: NodeNotReady
	  Normal  NodeHasSufficientPID     9m23s (x4 over 14m)  kubelet          Node ha-286000-m04 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  9m23s (x4 over 14m)  kubelet          Node ha-286000-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m23s (x4 over 14m)  kubelet          Node ha-286000-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                9m23s (x3 over 14m)  kubelet          Node ha-286000-m04 status is now: NodeReady
	  Normal  RegisteredNode           7m28s                node-controller  Node ha-286000-m04 event: Registered Node ha-286000-m04 in Controller
	  Normal  RegisteredNode           7m8s                 node-controller  Node ha-286000-m04 event: Registered Node ha-286000-m04 in Controller
	  Normal  NodeNotReady             6m48s                node-controller  Node ha-286000-m04 status is now: NodeNotReady
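
Note: unlike the two control-plane nodes, ha-286000-m04 carries node.kubernetes.io/unreachable NoExecute/NoSchedule taints, all four conditions are Unknown ("Kubelet stopped posting node status"), and its lease RenewTime froze at 17:22:50, which is what drove the repeated NodeNotReady events. A hedged one-liner to watch just those fields, assuming the ha-286000 context:

	kubectl --context ha-286000 get node ha-286000-m04 -o jsonpath='{.spec.taints}{"\n"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}'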
	
	
	==> dmesg <==
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.035803] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.008121] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.699152] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.007082] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.882621] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.230843] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.349832] systemd-fstab-generator[472]: Ignoring "noauto" option for root device
	[  +0.095939] systemd-fstab-generator[484]: Ignoring "noauto" option for root device
	[  +2.008291] systemd-fstab-generator[1111]: Ignoring "noauto" option for root device
	[  +0.258306] systemd-fstab-generator[1147]: Ignoring "noauto" option for root device
	[  +0.099664] systemd-fstab-generator[1159]: Ignoring "noauto" option for root device
	[  +0.061191] kauditd_printk_skb: 123 callbacks suppressed
	[  +0.060084] systemd-fstab-generator[1173]: Ignoring "noauto" option for root device
	[  +2.467356] systemd-fstab-generator[1389]: Ignoring "noauto" option for root device
	[  +0.100054] systemd-fstab-generator[1401]: Ignoring "noauto" option for root device
	[  +0.107009] systemd-fstab-generator[1413]: Ignoring "noauto" option for root device
	[  +0.132145] systemd-fstab-generator[1428]: Ignoring "noauto" option for root device
	[  +0.458193] systemd-fstab-generator[1593]: Ignoring "noauto" option for root device
	[  +6.918226] kauditd_printk_skb: 190 callbacks suppressed
	[Aug16 17:24] kauditd_printk_skb: 40 callbacks suppressed
	[ +21.525016] kauditd_printk_skb: 82 callbacks suppressed
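
Note: nothing fatal here: the ACPI and RETBleed warnings and the systemd-fstab-generator "Ignoring \"noauto\"" lines are routine boot noise for a hyperkit guest, and "kauditd_printk_skb: N callbacks suppressed" only means audit messages were rate-limited. To pull just warnings and errors from the node, assuming the ISO's dmesg supports util-linux-style level filtering:

	out/minikube-darwin-amd64 -p ha-286000 ssh -- sudo dmesg --level=warn,err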
	
	
	==> etcd [77cac41fb9bd] <==
	{"level":"info","ts":"2024-08-16T17:24:16.876969Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:24:16.894983Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"9633c02797b6d34","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-16T17:24:16.895123Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:24:16.966205Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"9633c02797b6d34","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-16T17:24:16.966225Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"warn","ts":"2024-08-16T17:24:17.366286Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740435865607170,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-16T17:24:17.388375Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"9633c02797b6d34","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-16T17:24:17.389606Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"9633c02797b6d34","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-16T17:24:17.664302Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 3, index: 4621, vote: b8c6c7563d17d844] cast MsgPreVote for 9633c02797b6d34 [logterm: 3, index: 4621] at term 3"}
	{"level":"info","ts":"2024-08-16T17:24:17.667262Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [term: 3] received a MsgVote message with higher term from 9633c02797b6d34 [term: 4]"}
	{"level":"info","ts":"2024-08-16T17:24:17.667420Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became follower at term 4"}
	{"level":"info","ts":"2024-08-16T17:24:17.667474Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 3, index: 4621, vote: 0] cast MsgVote for 9633c02797b6d34 [logterm: 3, index: 4621] at term 4"}
	{"level":"info","ts":"2024-08-16T17:24:17.668493Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b8c6c7563d17d844 elected leader 9633c02797b6d34 at term 4"}
	{"level":"warn","ts":"2024-08-16T17:24:17.668980Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"3.305918651s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-08-16T17:24:17.669025Z","caller":"traceutil/trace.go:171","msg":"trace[958839912] range","detail":"{range_begin:; range_end:; }","duration":"3.306272649s","start":"2024-08-16T17:24:14.362747Z","end":"2024-08-16T17:24:17.669020Z","steps":["trace[958839912] 'agreement among raft nodes before linearized reading'  (duration: 3.305917726s)"],"step_count":1}
	{"level":"error","ts":"2024-08-16T17:24:17.669050Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: leader changed\n[+]data_corruption ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-08-16T17:24:17.672550Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b8c6c7563d17d844","local-member-attributes":"{Name:ha-286000 ClientURLs:[https://192.169.0.5:2379]}","request-path":"/0/members/b8c6c7563d17d844/attributes","cluster-id":"b73189effde9bc63","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-16T17:24:17.672690Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T17:24:17.673076Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-16T17:24:17.673114Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-16T17:24:17.672747Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T17:24:17.675839Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T17:24:17.676355Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T17:24:17.676582Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.5:2379"}
	{"level":"info","ts":"2024-08-16T17:24:17.677166Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [d8dadff6cec7] <==
	{"level":"info","ts":"2024-08-16T17:23:23.603134Z","caller":"traceutil/trace.go:171","msg":"trace[1695899457] range","detail":"{range_begin:/registry/persistentvolumeclaims/; range_end:/registry/persistentvolumeclaims0; }","duration":"7.280387387s","start":"2024-08-16T17:23:16.322744Z","end":"2024-08-16T17:23:23.603132Z","steps":["trace[1695899457] 'agreement among raft nodes before linearized reading'  (duration: 7.280377262s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T17:23:23.603145Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T17:23:16.322710Z","time spent":"7.280431347s","remote":"127.0.0.1:56178","response type":"/etcdserverpb.KV/Range","request count":0,"request size":72,"response count":0,"response size":0,"request content":"key:\"/registry/persistentvolumeclaims/\" range_end:\"/registry/persistentvolumeclaims0\" count_only:true "}
	2024/08/16 17:23:23 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-16T17:23:23.603197Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"3.204231928s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingadmissionpolicies/\" range_end:\"/registry/validatingadmissionpolicies0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-08-16T17:23:23.603208Z","caller":"traceutil/trace.go:171","msg":"trace[821531247] range","detail":"{range_begin:/registry/validatingadmissionpolicies/; range_end:/registry/validatingadmissionpolicies0; }","duration":"3.204245539s","start":"2024-08-16T17:23:20.398959Z","end":"2024-08-16T17:23:23.603205Z","steps":["trace[821531247] 'agreement among raft nodes before linearized reading'  (duration: 3.204231749s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T17:23:23.603218Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T17:23:20.398944Z","time spent":"3.204271101s","remote":"127.0.0.1:56532","response type":"/etcdserverpb.KV/Range","request count":0,"request size":82,"response count":0,"response size":0,"request content":"key:\"/registry/validatingadmissionpolicies/\" range_end:\"/registry/validatingadmissionpolicies0\" count_only:true "}
	2024/08/16 17:23:23 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-16T17:23:23.604807Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740435533448225,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-08-16T17:23:23.605017Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-16T17:23:23.605028Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-16T17:23:23.605034Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 3"}
	{"level":"info","ts":"2024-08-16T17:23:23.605042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 3, index: 4621] sent MsgPreVote request to 9633c02797b6d34 at term 3"}
	{"level":"warn","ts":"2024-08-16T17:23:23.646548Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-16T17:23:23.646617Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-16T17:23:23.646652Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-16T17:23:23.647836Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:23:23.647877Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:23:23.647896Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:23:23.648043Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:23:23.648105Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:23:23.648130Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:23:23.648158Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"9633c02797b6d34"}
	{"level":"info","ts":"2024-08-16T17:23:23.650448Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-08-16T17:23:23.650508Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-08-16T17:23:23.650516Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-286000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> kernel <==
	 17:31:50 up 8 min,  0 users,  load average: 0.25, 0.14, 0.05
	Linux ha-286000 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2677394dcd66] <==
	I0816 17:22:35.224951       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	I0816 17:22:45.231619       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:22:45.231806       1 main.go:299] handling current node
	I0816 17:22:45.231910       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:22:45.231994       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:22:45.232158       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 17:22:45.232263       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	I0816 17:22:55.225733       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:22:55.225894       1 main.go:299] handling current node
	I0816 17:22:55.225954       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:22:55.226004       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:22:55.226143       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 17:22:55.226223       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	I0816 17:23:05.224175       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:23:05.224416       1 main.go:299] handling current node
	I0816 17:23:05.224540       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:23:05.224830       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:23:05.225112       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 17:23:05.225305       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	I0816 17:23:15.226037       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:23:15.226204       1 main.go:299] handling current node
	I0816 17:23:15.226257       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:23:15.226357       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:23:15.226471       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 17:23:15.226617       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [f9023c4cc7d0] <==
	I0816 17:31:10.463804       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:31:20.461164       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 17:31:20.461261       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	I0816 17:31:20.461699       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:31:20.461813       1 main.go:299] handling current node
	I0816 17:31:20.461829       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:31:20.461838       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:31:30.455126       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:31:30.455158       1 main.go:299] handling current node
	I0816 17:31:30.455174       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:31:30.455183       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:31:30.455287       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 17:31:30.455335       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	I0816 17:31:40.464046       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:31:40.464134       1 main.go:299] handling current node
	I0816 17:31:40.464169       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:31:40.464194       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:31:40.464480       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 17:31:40.464649       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	I0816 17:31:50.461726       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 17:31:50.461779       1 main.go:299] handling current node
	I0816 17:31:50.461793       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 17:31:50.461799       1 main.go:322] Node ha-286000-m02 has CIDR [10.244.1.0/24] 
	I0816 17:31:50.461946       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 17:31:50.461974       1 main.go:322] Node ha-286000-m04 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [63b366c951f2] <==
	W0816 17:23:23.632159       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.632202       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.632235       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.632264       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.632290       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.632317       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.632341       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.632369       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.632396       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.632421       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.632450       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.632476       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.632504       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.632531       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.632590       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0816 17:23:23.633069       1 controller.go:195] "Failed to update lease" err="rpc error: code = Unknown desc = malformed header: missing HTTP content-type"
	W0816 17:23:23.633101       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.633121       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.633137       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.633159       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.633175       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.633190       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.633205       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.633221       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:23:23.633236       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [64b3c5f995d8] <==
	I0816 17:24:18.567658       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0816 17:24:18.568203       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0816 17:24:18.568352       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0816 17:24:18.635954       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0816 17:24:18.636020       1 policy_source.go:224] refreshing policies
	I0816 17:24:18.661089       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0816 17:24:18.661333       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0816 17:24:18.665098       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0816 17:24:18.665805       1 shared_informer.go:320] Caches are synced for configmaps
	I0816 17:24:18.666159       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0816 17:24:18.666396       1 aggregator.go:171] initial CRD sync complete...
	I0816 17:24:18.666573       1 autoregister_controller.go:144] Starting autoregister controller
	I0816 17:24:18.669371       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0816 17:24:18.669507       1 cache.go:39] Caches are synced for autoregister controller
	I0816 17:24:18.669649       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0816 17:24:18.673264       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0816 17:24:18.673925       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0816 17:24:18.676414       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0816 17:24:18.681474       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0816 17:24:18.693871       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0816 17:24:18.734462       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0816 17:24:19.567976       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0816 17:24:19.905347       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5 192.169.0.6]
	I0816 17:24:19.907243       1 controller.go:615] quota admission added evaluator for: endpoints
	I0816 17:24:19.913024       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [257f5b412fe2] <==
	I0816 17:23:57.992802       1 serving.go:386] Generated self-signed cert in-memory
	I0816 17:23:58.299343       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0816 17:23:58.299552       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 17:23:58.302121       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0816 17:23:58.302479       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0816 17:23:58.302580       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0816 17:23:58.303517       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0816 17:24:18.587870       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-controller-manager [88937b4d9b3f] <==
	I0816 17:24:42.951528       1 shared_informer.go:320] Caches are synced for garbage collector
	I0816 17:24:42.974213       1 shared_informer.go:320] Caches are synced for garbage collector
	I0816 17:24:42.974443       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0816 17:25:02.082814       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:25:02.095196       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:25:02.127968       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="39.027587ms"
	I0816 17:25:02.128030       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="24.447µs"
	I0816 17:25:02.643392       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:25:07.139420       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m04"
	I0816 17:25:08.371423       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-mqfxs EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-mqfxs\": the object has been modified; please apply your changes to the latest version and try again"
	I0816 17:25:08.371686       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"fd9fe61f-ffc6-4f61-848a-91dfce599e44", APIVersion:"v1", ResourceVersion:"301", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-mqfxs EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-mqfxs": the object has been modified; please apply your changes to the latest version and try again
	I0816 17:25:08.374007       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="29.442371ms"
	I0816 17:25:08.393173       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-mqfxs EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-mqfxs\": the object has been modified; please apply your changes to the latest version and try again"
	I0816 17:25:08.393688       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"fd9fe61f-ffc6-4f61-848a-91dfce599e44", APIVersion:"v1", ResourceVersion:"301", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-mqfxs EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-mqfxs": the object has been modified; please apply your changes to the latest version and try again
	I0816 17:25:08.408690       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="34.610733ms"
	I0816 17:25:08.408944       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="211.164µs"
	I0816 17:29:26.116788       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000"
	I0816 17:29:28.983013       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-286000-m02"
	I0816 17:30:02.644115       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7dff88458-99xmp"
	I0816 17:30:02.656658       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="30.99µs"
	I0816 17:30:02.708557       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="47.170668ms"
	I0816 17:30:02.730962       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="22.359755ms"
	E0816 17:30:02.731009       1 replica_set.go:560] "Unhandled Error" err="sync \"default/busybox-7dff88458\" failed with Operation cannot be fulfilled on replicasets.apps \"busybox-7dff88458\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0816 17:30:02.732470       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="44.954µs"
	I0816 17:30:02.738153       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="43.511µs"
	
	
	==> kube-proxy [60feb425249e] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 17:24:29.419881       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 17:24:29.442807       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0816 17:24:29.442895       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 17:24:29.500213       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 17:24:29.500259       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 17:24:29.500279       1 server_linux.go:169] "Using iptables Proxier"
	I0816 17:24:29.504235       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 17:24:29.504982       1 server.go:483] "Version info" version="v1.31.0"
	I0816 17:24:29.505010       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 17:24:29.508282       1 config.go:197] "Starting service config controller"
	I0816 17:24:29.508363       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 17:24:29.508991       1 config.go:326] "Starting node config controller"
	I0816 17:24:29.509044       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 17:24:29.510479       1 config.go:104] "Starting endpoint slice config controller"
	I0816 17:24:29.510508       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 17:24:29.609193       1 shared_informer.go:320] Caches are synced for node config
	I0816 17:24:29.609332       1 shared_informer.go:320] Caches are synced for service config
	I0816 17:24:29.610541       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [81f6c96d4649] <==
	E0816 17:18:57.696982       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2592\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:00.770881       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2600": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:00.770973       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2600\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:00.771455       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2678": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:00.771540       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2678\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:03.838026       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2592": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:03.838287       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2592\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:09.980567       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2600": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:09.980625       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2600\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:13.053000       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2592": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:13.053145       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2592\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:16.125305       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2678": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:16.125738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2678\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:28.413017       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2600": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:28.413242       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2600\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:37.633251       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2592": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:37.633353       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2592\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:37.633417       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2678": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:37.633437       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2678\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:19:56.059814       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2600": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:19:56.059845       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-286000&resourceVersion=2600\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:20:17.564736       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2592": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:20:17.564831       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2592\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:20:17.565065       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2678": dial tcp 192.169.0.254:8443: connect: no route to host
	E0816 17:20:17.565112       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2678\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [bcd696090d54] <==
	I0816 17:23:57.780845       1 serving.go:386] Generated self-signed cert in-memory
	W0816 17:24:08.860542       1 authentication.go:370] Error looking up in-cluster authentication configuration: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0816 17:24:08.860585       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0816 17:24:08.860591       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0816 17:24:18.591414       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0816 17:24:18.591456       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 17:24:18.606860       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0816 17:24:18.608591       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0816 17:24:18.608692       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 17:24:18.609554       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0816 17:24:18.708922       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [f7b2e9efdd94] <==
	W0816 17:20:20.013337       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 17:20:20.013508       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0816 17:20:22.503962       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 17:20:22.504039       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:20:23.117539       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 17:20:23.117759       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0816 17:20:24.619908       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 17:20:24.620160       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:20:32.932878       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 17:20:32.932925       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:20:34.100467       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 17:20:34.100511       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:20:36.209664       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 17:20:36.209784       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:20:36.615553       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 17:20:36.615615       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:20:37.131529       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 17:20:37.131621       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:20:37.319247       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 17:20:37.319312       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:20:39.232294       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 17:20:39.232326       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0816 17:21:33.466903       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?resourceVersion=2660": dial tcp 192.169.0.5:8443: connect: connection refused
	E0816 17:21:33.467202       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?resourceVersion=2660\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	E0816 17:23:23.612582       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 16 17:27:50 ha-286000 kubelet[1600]: E0816 17:27:50.045671    1600 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 17:27:50 ha-286000 kubelet[1600]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 17:27:50 ha-286000 kubelet[1600]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 17:27:50 ha-286000 kubelet[1600]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:27:50 ha-286000 kubelet[1600]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 17:28:50 ha-286000 kubelet[1600]: E0816 17:28:50.046455    1600 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 17:28:50 ha-286000 kubelet[1600]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 17:28:50 ha-286000 kubelet[1600]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 17:28:50 ha-286000 kubelet[1600]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:28:50 ha-286000 kubelet[1600]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 17:29:50 ha-286000 kubelet[1600]: E0816 17:29:50.046582    1600 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 17:29:50 ha-286000 kubelet[1600]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 17:29:50 ha-286000 kubelet[1600]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 17:29:50 ha-286000 kubelet[1600]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:29:50 ha-286000 kubelet[1600]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 17:30:50 ha-286000 kubelet[1600]: E0816 17:30:50.046207    1600 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 17:30:50 ha-286000 kubelet[1600]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 17:30:50 ha-286000 kubelet[1600]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 17:30:50 ha-286000 kubelet[1600]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:30:50 ha-286000 kubelet[1600]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 17:31:50 ha-286000 kubelet[1600]: E0816 17:31:50.050002    1600 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 17:31:50 ha-286000 kubelet[1600]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 17:31:50 ha-286000 kubelet[1600]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 17:31:50 ha-286000 kubelet[1600]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:31:50 ha-286000 kubelet[1600]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-286000 -n ha-286000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-286000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7dff88458-pcqtw
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/StopCluster]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-286000 describe pod busybox-7dff88458-pcqtw
helpers_test.go:282: (dbg) kubectl --context ha-286000 describe pod busybox-7dff88458-pcqtw:

                                                
                                                
-- stdout --
	Name:             busybox-7dff88458-pcqtw
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9ff7z (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-9ff7z:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  109s  default-scheduler  0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  110s  default-scheduler  0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (64.17s)
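
For anyone replaying this post-mortem by hand: the non-running-pod check at helpers_test.go:261 above is a plain field-selector query. Below is a minimal standalone Go sketch (not part of the test suite) that shells out the same way the (dbg) Run lines do; it assumes kubectl is on PATH and the ha-286000 context exists in the kubeconfig.

	// Reproduce the harness's post-mortem query: list pods in any
	// namespace whose phase is not Running (command copied from the
	// helpers_test.go:261 line above).
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl",
			"--context", "ha-286000",
			"get", "po", "-A",
			"-o=jsonpath={.items[*].metadata.name}",
			"--field-selector=status.phase!=Running",
		).CombinedOutput()
		if err != nil {
			fmt.Printf("kubectl failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("non-running pods: %s\n", out)
	}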

                                                
                                    
TestMountStart/serial/StartWithMountFirst (136.73s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-873000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit 
E0816 10:36:32.705398    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/addons-725000/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p mount-start-1-873000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit : exit status 80 (2m16.649209836s)

                                                
                                                
-- stdout --
	* [mount-start-1-873000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting minikube without Kubernetes in cluster mount-start-1-873000
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "mount-start-1-873000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 7e:2:0:c3:a7:a2
	* Failed to start hyperkit VM. Running "minikube delete -p mount-start-1-873000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for b2:3:36:b:d5:4b
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for b2:3:36:b:d5:4b
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-amd64 start -p mount-start-1-873000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-873000 -n mount-start-1-873000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-873000 -n mount-start-1-873000: exit status 7 (79.154827ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 10:38:09.233515    6462 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0816 10:38:09.233538    6462 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-873000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMountStart/serial/StartWithMountFirst (136.73s)
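
Both retries above fail in the hyperkit driver while it waits for the VM's MAC address to appear in the host's DHCP leases. A diagnostic sketch for checking that file by hand follows; it assumes (as labeled in the comments) that macOS vmnet writes leases to /var/db/dhcpd_leases, the file the driver polls, and that entries carry hw_address= lines.

	// Diagnostic sketch, not driver code. Assumptions: leases live in
	// /var/db/dhcpd_leases and entries look roughly like
	// "hw_address=1,b2:3:36:b:d5:4b".
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const mac = "b2:3:36:b:d5:4b" // MAC from the GUEST_PROVISION error above
		f, err := os.Open("/var/db/dhcpd_leases")
		if err != nil {
			fmt.Println("cannot read leases file:", err)
			return
		}
		defer f.Close()
		found := false
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			if strings.Contains(sc.Text(), mac) {
				found = true
				fmt.Println("lease entry mentions MAC:", sc.Text())
			}
		}
		if !found {
			fmt.Println("no lease for", mac, "- consistent with the failure above")
		}
	}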

                                                
                                    
TestScheduledStopUnix (142.11s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-078000 --memory=2048 --driver=hyperkit 
E0816 10:50:35.633100    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/functional-373000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:51:32.720212    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/addons-725000/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-078000 --memory=2048 --driver=hyperkit : exit status 80 (2m16.763228475s)

                                                
                                                
-- stdout --
	* [scheduled-stop-078000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "scheduled-stop-078000" primary control-plane node in "scheduled-stop-078000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "scheduled-stop-078000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 56:19:a8:97:cf:5c
	* Failed to start hyperkit VM. Running "minikube delete -p scheduled-stop-078000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 3a:9a:2d:50:9e:6f
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 3a:9a:2d:50:9e:6f
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [scheduled-stop-078000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "scheduled-stop-078000" primary control-plane node in "scheduled-stop-078000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "scheduled-stop-078000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 56:19:a8:97:cf:5c
	* Failed to start hyperkit VM. Running "minikube delete -p scheduled-stop-078000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 3a:9a:2d:50:9e:6f
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 3a:9a:2d:50:9e:6f
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-08-16 10:52:19.666347 -0700 PDT m=+3882.310603521
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-078000 -n scheduled-stop-078000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-078000 -n scheduled-stop-078000: exit status 7 (79.468342ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 10:52:19.744117    7322 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0816 10:52:19.744141    7322 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-078000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "scheduled-stop-078000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-078000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-078000: (5.264380085s)
--- FAIL: TestScheduledStopUnix (142.11s)
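
The stderr box above already names the likely recovery path. As a hedged sketch (profile name and flags copied from this run, not a guaranteed fix), the manual retry would be:

	# suggested by the "may fix it" hint in the failure output; assumes the same profile
	out/minikube-darwin-amd64 delete -p scheduled-stop-078000
	out/minikube-darwin-amd64 start -p scheduled-stop-078000 --memory=2048 --driver=hyperkit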

                                                
                                    
TestPause/serial/Start (141.02s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-220000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit 
E0816 11:30:35.700808    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/functional-373000/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p pause-220000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit : exit status 80 (2m20.937068219s)

                                                
                                                
-- stdout --
	* [pause-220000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "pause-220000" primary control-plane node in "pause-220000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "pause-220000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ce:55:fb:fa:e8:7
	* Failed to start hyperkit VM. Running "minikube delete -p pause-220000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ee:3c:60:6f:3:1d
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ee:3c:60:6f:3:1d
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-amd64 start -p pause-220000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-220000 -n pause-220000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-220000 -n pause-220000: exit status 7 (80.858595ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 11:32:53.238715   10211 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0816 11:32:53.238737   10211 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-220000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestPause/serial/Start (141.02s)
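
The failures above share one root error: the hyperkit driver never sees the VM's MAC address appear in macOS's DHCP leases. A minimal diagnostic sketch (assuming the standard macOS lease file /var/db/dhcpd_leases, which the driver polls; the MAC fragment is taken from the error message above):

	# check whether the guest ever obtained a lease on the host
	grep -i -B 1 -A 2 'ee:3c:60:6f' /var/db/dhcpd_leases

If no matching entry appears, the guest never completed DHCP, which usually points at the host's bootpd/network setup rather than at minikube itself.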

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (7201.735s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-800000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0
E0816 11:44:21.780138    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/false-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:44:24.288048    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/flannel-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:44:24.294450    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/flannel-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:44:24.305964    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/flannel-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:44:24.327516    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/flannel-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:44:24.369748    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/flannel-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:44:24.452380    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/flannel-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:44:24.615804    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/flannel-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:44:24.939402    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/flannel-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:44:25.581035    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/flannel-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:44:26.862915    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/flannel-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:44:29.230588    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/auto-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:44:29.425047    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/flannel-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:44:34.546660    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/flannel-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:44:44.764578    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/enable-default-cni-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:44:44.788313    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/flannel-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:44:49.894786    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/custom-flannel-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:44:56.936174    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/auto-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:45:05.270117    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/flannel-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:45:18.787641    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/functional-373000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:45:31.642259    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/bridge-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:45:31.648711    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/bridge-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:45:31.660534    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/bridge-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:45:31.682003    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/bridge-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:45:31.724131    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/bridge-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:45:31.807650    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/bridge-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:45:31.971237    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/bridge-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:45:32.294759    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/bridge-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:45:32.938128    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/bridge-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:45:34.220098    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/bridge-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:45:35.703919    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/functional-373000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:45:36.072307    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/kindnet-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:45:36.782421    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/bridge-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:45:41.905836    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/bridge-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:45:43.702032    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/false-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:45:46.233136    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/flannel-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:45:48.822050    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/kubenet-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:45:48.828646    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/kubenet-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:45:48.841454    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/kubenet-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:45:48.863412    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/kubenet-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:45:48.906805    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/kubenet-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:45:48.988378    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/kubenet-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:45:49.151673    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/kubenet-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:45:49.473469    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/kubenet-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:45:50.116739    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/kubenet-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:45:51.399466    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/kubenet-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:45:52.148285    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/bridge-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:45:53.961787    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/kubenet-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:45:59.083659    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/kubenet-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:46:03.775654    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/kindnet-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:46:06.686557    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/enable-default-cni-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:46:08.787239    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/calico-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:46:09.325054    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/kubenet-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:46:12.629481    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/bridge-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:46:29.806696    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/kubenet-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:46:32.792206    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/addons-725000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:46:36.493240    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/calico-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:46:53.592292    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/bridge-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:47:06.024120    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/custom-flannel-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:47:08.154228    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/flannel-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:47:10.768076    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/kubenet-300000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:47:33.734219    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/custom-flannel-300000/client.crt: no such file or directory" logger="UnhandledError"
panic: test timed out after 2h0m0s
running tests:
	TestNetworkPlugins (53m19s)
	TestNetworkPlugins/group (6m19s)
	TestStartStop (15m18s)
	TestStartStop/group/no-preload (6m19s)
	TestStartStop/group/no-preload/serial (6m19s)
	TestStartStop/group/no-preload/serial/SecondStart (4m37s)
	TestStartStop/group/old-k8s-version (6m37s)
	TestStartStop/group/old-k8s-version/serial (6m37s)
	TestStartStop/group/old-k8s-version/serial/SecondStart (3m28s)

                                                
                                                
goroutine 3832 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

                                                
                                                
goroutine 1 [chan receive, 18 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc00080ed00, 0xc00081fbb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc000a125a0, {0xa5cb0e0, 0x2a, 0x2a}, {0x5c066c5?, 0x792b8d8?, 0xa5ee760?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc0009663c0)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc0009663c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

                                                
                                                
goroutine 9 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000887c80)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 2675 [chan receive, 14 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00197a7c0, 0xc000058d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2689
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2193 [chan receive, 16 minutes]:
testing.(*T).Run(0xc001b96340, {0x78d127d?, 0x5c79c13?}, 0x902e240)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc001b96340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc001b96340, 0x902e0e0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 156 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000d86580, 0xc000058d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 154
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 161 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000d86550, 0x2d)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc000bedd80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x907c6e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000d86580)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000888010, {0x903c320, 0xc000b901b0}, 0x1, 0xc000058d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000888010, 0x3b9aca00, 0x0, 0x1, 0xc000058d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 156
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

                                                
                                                
goroutine 86 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1141 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 85
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1137 +0x171

                                                
                                                
goroutine 3054 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3053
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3499 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x9062690, 0xc000058d80}, 0xc001574750, 0xc001574798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x9062690, 0xc000058d80}, 0x0?, 0xc001574750, 0xc001574798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x9062690?, 0xc000058d80?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x6180b25?, 0xc0004e3e00?, 0x9058d40?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3515
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

                                                
                                                
goroutine 2151 [chan receive, 55 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000862b00, 0xc000058d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2095
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3141 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x9058d40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3165
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 163 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 162
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2892 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0006fe710, 0x12)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc00028ad80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x907c6e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0006fe740)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0002be2f0, {0x903c320, 0xc001eb5ad0}, 0x1, 0xc000058d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0002be2f0, 0x3b9aca00, 0x0, 0x1, 0xc000058d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2910
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

                                                
                                                
goroutine 3817 [select, 4 minutes]:
os/exec.(*Cmd).watchCtx(0xc00151b380, 0xc001fa7860)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3814
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 715 [IO wait, 111 minutes]:
internal/poll.runtime_pollWait(0x5207b3c0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc000d94a00?, 0x3fe?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc000d94a00)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc000d94a00)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc0006d0780)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc0006d0780)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0002325a0, {0x9055160, 0xc0006d0780})
	/usr/local/go/src/net/http/server.go:3260 +0x33e
net/http.(*Server).ListenAndServe(0xc0002325a0)
	/usr/local/go/src/net/http/server.go:3189 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc00082f860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2213 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 712
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2212 +0x129

                                                
                                                
goroutine 2627 [chan receive, 16 minutes]:
testing.(*testContext).waitParallel(0xc0007fcb90)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000b61ba0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000b61ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000b61ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc000b61ba0, 0xc001a98380)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2608
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2625 [chan receive, 6 minutes]:
testing.(*T).Run(0xc000b61860, {0x78d28d2?, 0x0?}, 0xc0007c6b80)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000b61860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc000b61860, 0xc001a982c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2608
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 162 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x9062690, 0xc000058d80}, 0xc000bf1f50, 0xc000bf1f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x9062690, 0xc000058d80}, 0x0?, 0xc000bf1f50, 0xc000bf1f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x9062690?, 0xc000058d80?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 156
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

                                                
                                                
goroutine 155 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x9058d40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 154
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 2674 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x9058d40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2689
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 3638 [chan receive, 4 minutes]:
testing.(*T).Run(0xc001b62000, {0x78de4ec?, 0x60400000004?}, 0xc001b76800)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001b62000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc001b62000, 0xc0007c6b80)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2625
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2098 [chan receive, 53 minutes]:
testing.(*T).Run(0xc001b96680, {0x78d127d?, 0x437eacacbfd?}, 0xc0015460f0)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc001b96680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc001b96680, 0x902e098)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3052 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc001bfe410, 0x12)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc000d4cd80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x907c6e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001bfe440)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001a129d0, {0x903c320, 0xc001be1170}, 0x1, 0xc000058d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001a129d0, 0x3b9aca00, 0x0, 0x1, 0xc000058d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3031
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

                                                
                                                
goroutine 3053 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x9062690, 0xc000058d80}, 0xc001474f50, 0xc000b57f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x9062690, 0xc000058d80}, 0x40?, 0xc001474f50, 0xc001474f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x9062690?, 0xc000058d80?}, 0x60f7876?, 0xc0015b0900?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001474fd0?, 0x5cc0844?, 0xc001717440?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3031
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

                                                
                                                
goroutine 3248 [chan receive, 10 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001a985c0, 0xc000058d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3243
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3170 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x9062690, 0xc000058d80}, 0xc000093f50, 0xc000288f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x9062690, 0xc000058d80}, 0x60?, 0xc000093f50, 0xc000093f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x9062690?, 0xc000058d80?}, 0xc00082e340?, 0x5c7a540?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000093fd0?, 0x5cc0844?, 0xc001b1e2d0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3142
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

                                                
                                                
goroutine 2629 [chan receive, 16 minutes]:
testing.(*testContext).waitParallel(0xc0007fcb90)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001b96820)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001b96820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001b96820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc001b96820, 0xc001a98400)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2608
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2626 [chan receive, 16 minutes]:
testing.(*testContext).waitParallel(0xc0007fcb90)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000b61a00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000b61a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000b61a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc000b61a00, 0xc001a98300)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2608
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2693 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc00197a790, 0x12)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc000287d80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x907c6e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00197a7c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0014469d0, {0x903c320, 0xc000bf2cf0}, 0x1, 0xc000058d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0014469d0, 0x3b9aca00, 0x0, 0x1, 0xc000058d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2675
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

                                                
                                                
goroutine 3247 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x9058d40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3243
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 2164 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2163
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3729 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x9062690, 0xc000058d80}, 0xc001471750, 0xc001471798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x9062690, 0xc000058d80}, 0x2f?, 0xc001471750, 0xc001471798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x9062690?, 0xc000058d80?}, 0x74756f3d4e49425f?, 0x62756b696e696d2f?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x4200534f63616d5f?, 0x5349445f444c4955?, 0x4d414e5f59414c50?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3710
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

                                                
                                                
goroutine 2150 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x9058d40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2095
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 2790 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000915c90, 0x12)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc000d4ad80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x907c6e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000915d00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00209a0d0, {0x903c320, 0xc00158e180}, 0x1, 0xc000058d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00209a0d0, 0x3b9aca00, 0x0, 0x1, 0xc000058d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2802
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

                                                
                                                
goroutine 1411 [select, 105 minutes]:
net/http.(*persistConn).readLoop(0xc0009170e0)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 1396
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

                                                
                                                
goroutine 2628 [chan receive, 6 minutes]:
testing.(*T).Run(0xc001b961a0, {0x78d28d2?, 0x0?}, 0xc001b76f00)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001b961a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc001b961a0, 0xc001a983c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2608
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2910 [chan receive, 12 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0006fe740, 0xc000058d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2908
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2630 [chan receive, 16 minutes]:
testing.(*testContext).waitParallel(0xc0007fcb90)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001b971e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001b971e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001b971e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc001b971e0, 0xc001a98480)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2608
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2792 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2791
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2178 [chan receive, 6 minutes]:
testing.(*testContext).waitParallel(0xc0007fcb90)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1665 +0x5e9
testing.tRunner(0xc000b60000, 0xc0015460f0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2098
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1323 [chan send, 107 minutes]:
os/exec.(*Cmd).watchCtx(0xc001ff4480, 0xc001fa6ae0)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1329
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 966 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x9058d40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 867
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

goroutine 2695 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2694
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 3031 [chan receive, 11 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001bfe440, 0xc000058d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3026
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

goroutine 3030 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x9058d40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3026
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

goroutine 2694 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x9062690, 0xc000058d80}, 0xc001475f50, 0xc001475f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x9062690, 0xc000058d80}, 0x40?, 0xc001475f50, 0xc001475f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x9062690?, 0xc000058d80?}, 0xc000b61380?, 0x5c7a540?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001475fd0?, 0x5cc0844?, 0xc0019ecb40?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2675
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

goroutine 3372 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x9058d40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3371
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

goroutine 3142 [chan receive, 10 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000915e00, 0xc000058d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3165
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

goroutine 1122 [chan send, 107 minutes]:
os/exec.(*Cmd).watchCtx(0xc001a3a780, 0xc001923620)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1121
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 3378 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x9062690, 0xc000058d80}, 0xc001571750, 0xc001571798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x9062690, 0xc000058d80}, 0x7?, 0xc001571750, 0xc001571798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x9062690?, 0xc000058d80?}, 0xc0000da820?, 0x5c7a540?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0015717d0?, 0x5cc0844?, 0xc001444090?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3373
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

goroutine 2162 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc000862ad0, 0x1c)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc000286d80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x907c6e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000862b00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00209ac60, {0x903c320, 0xc001460330}, 0x1, 0xc000058d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00209ac60, 0x3b9aca00, 0x0, 0x1, 0xc000058d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2151
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

goroutine 3254 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3253
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 1412 [select, 105 minutes]:
net/http.(*persistConn).writeLoop(0xc0009170e0)
	/usr/local/go/src/net/http/transport.go:2458 +0xf0
created by net/http.(*Transport).dialConn in goroutine 1396
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

goroutine 2894 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2893
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 3515 [chan receive, 8 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00197b400, 0xc000058d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3494
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

goroutine 3373 [chan receive, 8 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001a98500, 0xc000058d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3371
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

goroutine 3704 [chan receive, 4 minutes]:
testing.(*T).Run(0xc00082f6c0, {0x78de4ec?, 0x60400000004?}, 0xc001f98180)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc00082f6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc00082f6c0, 0xc001b76f00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2628
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 935 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x9062690, 0xc000058d80}, 0xc001476f50, 0xc000d4ff98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x9062690, 0xc000058d80}, 0x40?, 0xc001476f50, 0xc001476f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x9062690?, 0xc000058d80?}, 0xc000b5c820?, 0x5c7a540?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001476fd0?, 0x5cc0844?, 0xc001717140?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 967
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

goroutine 936 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 935
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 934 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000862610, 0x2b)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc0014e5d80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x907c6e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000862640)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000719eb0, {0x903c320, 0xc000c40c90}, 0x1, 0xc000058d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000719eb0, 0x3b9aca00, 0x0, 0x1, 0xc000058d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 967
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

goroutine 967 [chan receive, 107 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000862640, 0xc000058d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 867
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

goroutine 3252 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc001a98590, 0x10)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc000bf0d80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x907c6e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001a985c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00049b1f0, {0x903c320, 0xc000c4be30}, 0x1, 0xc000058d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00049b1f0, 0x3b9aca00, 0x0, 0x1, 0xc000058d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3248
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

goroutine 3730 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3729
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 2791 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x9062690, 0xc000058d80}, 0xc000095750, 0xc0014e4f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x9062690, 0xc000058d80}, 0x0?, 0xc000095750, 0xc000095798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x9062690?, 0xc000058d80?}, 0xc00080fd40?, 0x5c7a540?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0000957d0?, 0x5cc0844?, 0xc000d485a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2802
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

goroutine 3500 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3499
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 3559 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x9062690, 0xc000058d80}, 0xc001572750, 0xc001572798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x9062690, 0xc000058d80}, 0x0?, 0xc001572750, 0xc001572798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x9062690?, 0xc000058d80?}, 0xc000b61601?, 0xc000058d80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0015727d0?, 0x5cc0844?, 0xc000058d80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3572
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

goroutine 2608 [chan receive, 16 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc000b61520, 0x902e240)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2193
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2163 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x9062690, 0xc000058d80}, 0xc000096750, 0xc000970f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x9062690, 0xc000058d80}, 0x88?, 0xc000096750, 0xc000096798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x9062690?, 0xc000058d80?}, 0xc001b96340?, 0x5c7a540?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0000967d0?, 0x5cc0844?, 0x902e0c8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2151
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

goroutine 2893 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x9062690, 0xc000058d80}, 0xc001573750, 0xc001573798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x9062690, 0xc000058d80}, 0x9c?, 0xc001573750, 0xc001573798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x9062690?, 0xc000058d80?}, 0xc0016a0eb0?, 0xc0019ecc60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0015737d0?, 0x5cc0844?, 0xc001a3aa80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2910
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

goroutine 1269 [chan send, 107 minutes]:
os/exec.(*Cmd).watchCtx(0xc001dd6180, 0xc001d51920)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1268
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 3171 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3170
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 2802 [chan receive, 12 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000915d00, 0xc000058d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2784
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

goroutine 3169 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000915dd0, 0x10)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc00028cd80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x907c6e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000915e00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001446ad0, {0x903c320, 0xc001a648d0}, 0x1, 0xc000058d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001446ad0, 0x3b9aca00, 0x0, 0x1, 0xc000058d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3142
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

goroutine 1311 [chan send, 105 minutes]:
os/exec.(*Cmd).watchCtx(0xc001dd7b00, 0xc002026780)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 854
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 3377 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc001a984d0, 0x10)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc000979d80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x907c6e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001a98500)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000d7cbf0, {0x903c320, 0xc001444390}, 0x1, 0xc000058d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000d7cbf0, 0x3b9aca00, 0x0, 0x1, 0xc000058d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3373
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

goroutine 3769 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc00197b250, 0x0)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc0019e3d80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x907c6e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00197b280)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00209ac80, {0x903c320, 0xc000bf2a50}, 0x1, 0xc000058d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00209ac80, 0x3b9aca00, 0x0, 0x1, 0xc000058d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3788
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

goroutine 3770 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x9062690, 0xc000058d80}, 0xc001577f50, 0xc001577f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x9062690, 0xc000058d80}, 0xe0?, 0xc001577f50, 0xc001577f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x9062690?, 0xc000058d80?}, 0xc001b63040?, 0x5c7a540?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001577fd0?, 0x5cc0844?, 0xc001577fa8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3788
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

goroutine 2801 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x9058d40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2784
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

goroutine 3710 [chan receive, 6 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000d866c0, 0xc000058d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3724
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

goroutine 3765 [syscall, 4 minutes]:
syscall.syscall6(0xc000bf3f80?, 0x1000000000010?, 0x10000000019?, 0x52157d68?, 0x90?, 0xb0875b8?, 0x90?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc000beab48?, 0x5b470c5?, 0x90?, 0x8f979c0?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0x5c77885?, 0xc000beab7c, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc001a1a1b0)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0015b0480)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc0015b0480)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc001b97a00, 0xc0015b0480)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x90624d0, 0xc000790310}, 0xc001b97a00, {0xc001ed98d8, 0x11}, {0xc90e370019e7f58?, 0xc0019e7f60?}, {0x5c79c13?, 0x5bd1c6f?}, {0xc000964c00, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc001b97a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc001b97a00, 0xc001f98180)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3704
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2909 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x9058d40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2908
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

goroutine 3253 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x9062690, 0xc000058d80}, 0xc00147bf50, 0xc00147bf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x9062690, 0xc000058d80}, 0x10?, 0xc00147bf50, 0xc00147bf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x9062690?, 0xc000058d80?}, 0xc00082fba0?, 0x5c7a540?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00147bfd0?, 0x5cc0844?, 0xc000bde7e0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3248
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

goroutine 3771 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3770
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 3498 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc00197b3d0, 0xf)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc0014e0d80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x907c6e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00197b400)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00170e3d0, {0x903c320, 0xc000d83b30}, 0x1, 0xc000058d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00170e3d0, 0x3b9aca00, 0x0, 0x1, 0xc000058d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3515
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

goroutine 3379 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3378
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 3571 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x9058d40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3551
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

goroutine 3558 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc001ebd5d0, 0xf)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc000b5ad80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x907c6e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001ebd600)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0002bf6b0, {0x903c320, 0xc000c40c60}, 0x1, 0xc000058d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0002bf6b0, 0x3b9aca00, 0x0, 0x1, 0xc000058d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3572
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

goroutine 3572 [chan receive, 8 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001ebd600, 0xc000058d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3551
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

goroutine 3514 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x9058d40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3494
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

goroutine 3560 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3559
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 3728 [sync.Cond.Wait, 6 minutes]:
sync.runtime_notifyListWait(0xc000d86690, 0x0)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc001470d80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x907c6e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000d866c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00209a080, {0x903c320, 0xc001a64e40}, 0x1, 0xc000058d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00209a080, 0x3b9aca00, 0x0, 0x1, 0xc000058d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3710
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

goroutine 3768 [select, 4 minutes]:
os/exec.(*Cmd).watchCtx(0xc0015b0480, 0xc000059b00)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3765
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 3709 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x9058d40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3724
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

goroutine 3787 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x9058d40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3783
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

goroutine 3767 [IO wait]:
internal/poll.runtime_pollWait(0x5207b1d0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0007707e0?, 0xc0016c415a?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0007707e0, {0xc0016c415a, 0x1bea6, 0x1bea6})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0009842c0, {0xc0016c415a?, 0x52152bf8?, 0x1fe5e?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000bf24e0, {0x903ace8, 0xc001db60e0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x903ae28, 0xc000bf24e0}, {0x903ace8, 0xc001db60e0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x903ae28, 0xc000bf24e0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc0019e16c0?, {0x903ae28?, 0xc000bf24e0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x903ae28, 0xc000bf24e0}, {0x903ada8, 0xc0009842c0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc00170c300?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3765
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

goroutine 3766 [IO wait]:
internal/poll.runtime_pollWait(0x5207aee8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc000770720?, 0xc000948305?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000770720, {0xc000948305, 0x4fb, 0x4fb})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000984238, {0xc000948305?, 0x5cbe8c7?, 0x22a?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000bf2480, {0x903ace8, 0xc001db60d0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x903ae28, 0xc000bf2480}, {0x903ace8, 0xc001db60d0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xa4fdba0?, {0x903ae28, 0xc000bf2480})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xf?, {0x903ae28?, 0xc000bf2480?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x903ae28, 0xc000bf2480}, {0x903ada8, 0xc000984238}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001f98180?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3765
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

goroutine 3814 [syscall, 4 minutes]:
syscall.syscall6(0xc000d61f80?, 0x1000000000010?, 0x10000000019?, 0x51d39f48?, 0x90?, 0xb087108?, 0x90?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc000289b48?, 0x5b470c5?, 0x90?, 0x8f979c0?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0x5c77885?, 0xc000289b7c, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc0015d8090)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc00151b380)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc00151b380)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc001b62d00, 0xc00151b380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x90624d0, 0xc00054e310}, 0xc001b62d00, {0xc001bb8960, 0x16}, {0x28ee68a801473758?, 0xc001473760?}, {0x5c79c13?, 0x5bd1c6f?}, {0xc00170cf00, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc001b62d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc001b62d00, 0xc001b76800)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3638
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 3788 [chan receive, 4 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00197b280, 0xc000058d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3783
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

goroutine 3815 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x5207acf8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001c23500?, 0xc000b8acd7?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001c23500, {0xc000b8acd7, 0x329, 0x329})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc001db65d0, {0xc000b8acd7?, 0x5cbe9da?, 0x202?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000d61950, {0x903ace8, 0xc0009851f0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x903ae28, 0xc000d61950}, {0x903ace8, 0xc0009851f0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xa4fdba0?, {0x903ae28, 0xc000d61950})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xf?, {0x903ae28?, 0xc000d61950?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x903ae28, 0xc000d61950}, {0x903ada8, 0xc001db65d0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001b76800?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3814
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

goroutine 3816 [IO wait]:
internal/poll.runtime_pollWait(0x5207afe0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001c235c0?, 0xc001b430ab?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001c235c0, {0xc001b430ab, 0x1cf55, 0x1cf55})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc001db65f0, {0xc001b430ab?, 0x51d38be8?, 0x1fe52?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000d61980, {0x903ace8, 0xc000985200})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x903ae28, 0xc000d61980}, {0x903ace8, 0xc000985200}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x353536313d44495f?, {0x903ae28, 0xc000d61980})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4e5f455341425f42?, {0x903ae28?, 0xc000d61980?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x903ae28, 0xc000d61980}, {0x903ada8, 0xc001db65f0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0x5245565245535f53?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3814
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae


Test pass (176/215)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 20.38
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.29
9 TestDownloadOnly/v1.20.0/DeleteAll 0.23
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.21
12 TestDownloadOnly/v1.31.0/json-events 6.78
13 TestDownloadOnly/v1.31.0/preload-exists 0
16 TestDownloadOnly/v1.31.0/kubectl 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.29
18 TestDownloadOnly/v1.31.0/DeleteAll 0.25
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.21
21 TestBinaryMirror 0.97
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.21
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.19
27 TestAddons/Setup 205.09
29 TestAddons/serial/Volcano 40.8
31 TestAddons/serial/GCPAuth/Namespaces 0.1
33 TestAddons/parallel/Registry 14.47
34 TestAddons/parallel/Ingress 20.15
35 TestAddons/parallel/InspektorGadget 11.49
36 TestAddons/parallel/MetricsServer 5.5
37 TestAddons/parallel/HelmTiller 9.68
39 TestAddons/parallel/CSI 50.89
40 TestAddons/parallel/Headlamp 17.42
41 TestAddons/parallel/CloudSpanner 5.38
42 TestAddons/parallel/LocalPath 53.38
43 TestAddons/parallel/NvidiaDevicePlugin 5.36
44 TestAddons/parallel/Yakd 10.46
45 TestAddons/StoppedEnableDisable 5.93
53 TestHyperKitDriverInstallOrUpdate 8.69
56 TestErrorSpam/setup 34.5
57 TestErrorSpam/start 1.74
58 TestErrorSpam/status 0.53
59 TestErrorSpam/pause 1.34
60 TestErrorSpam/unpause 1.44
61 TestErrorSpam/stop 153.86
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 74.65
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 37.22
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.05
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.19
73 TestFunctional/serial/CacheCmd/cache/add_local 1.36
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
75 TestFunctional/serial/CacheCmd/cache/list 0.08
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.17
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.09
78 TestFunctional/serial/CacheCmd/cache/delete 0.16
79 TestFunctional/serial/MinikubeKubectlCmd 1.22
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.56
81 TestFunctional/serial/ExtraConfig 41.11
82 TestFunctional/serial/ComponentHealth 0.05
83 TestFunctional/serial/LogsCmd 3.01
84 TestFunctional/serial/LogsFileCmd 2.67
85 TestFunctional/serial/InvalidService 4.64
87 TestFunctional/parallel/ConfigCmd 0.5
88 TestFunctional/parallel/DashboardCmd 11.24
89 TestFunctional/parallel/DryRun 0.97
90 TestFunctional/parallel/InternationalLanguage 0.61
91 TestFunctional/parallel/StatusCmd 0.5
95 TestFunctional/parallel/ServiceCmdConnect 8.61
96 TestFunctional/parallel/AddonsCmd 0.23
97 TestFunctional/parallel/PersistentVolumeClaim 26.36
99 TestFunctional/parallel/SSHCmd 0.28
100 TestFunctional/parallel/CpCmd 0.98
101 TestFunctional/parallel/MySQL 26.95
102 TestFunctional/parallel/FileSync 0.15
103 TestFunctional/parallel/CertSync 0.99
107 TestFunctional/parallel/NodeLabels 0.06
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.17
111 TestFunctional/parallel/License 0.63
112 TestFunctional/parallel/Version/short 0.1
113 TestFunctional/parallel/Version/components 0.4
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.16
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.16
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.16
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.15
118 TestFunctional/parallel/ImageCommands/ImageBuild 2.5
119 TestFunctional/parallel/ImageCommands/Setup 2.06
120 TestFunctional/parallel/DockerEnv/bash 0.57
121 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.86
125 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.6
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.49
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.24
128 TestFunctional/parallel/ImageCommands/ImageRemove 0.31
129 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.48
130 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.39
131 TestFunctional/parallel/ServiceCmd/DeployApp 22.14
132 TestFunctional/parallel/ServiceCmd/List 0.21
133 TestFunctional/parallel/ServiceCmd/JSONOutput 0.18
134 TestFunctional/parallel/ServiceCmd/HTTPS 0.33
136 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.38
137 TestFunctional/parallel/ServiceCmd/Format 0.25
138 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
140 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.17
141 TestFunctional/parallel/ServiceCmd/URL 0.25
142 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
143 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
144 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.04
145 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
146 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
147 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
148 TestFunctional/parallel/ProfileCmd/profile_not_create 0.28
149 TestFunctional/parallel/ProfileCmd/profile_list 0.26
150 TestFunctional/parallel/ProfileCmd/profile_json_output 0.26
151 TestFunctional/parallel/MountCmd/any-port 7.15
152 TestFunctional/parallel/MountCmd/specific-port 1.69
153 TestFunctional/parallel/MountCmd/VerifyCleanup 2.43
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
164 TestMultiControlPlane/serial/NodeLabels 0.05
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.36
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.32
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.37
177 TestImageBuild/serial/Setup 37.53
178 TestImageBuild/serial/NormalBuild 1.86
179 TestImageBuild/serial/BuildWithBuildArg 0.83
180 TestImageBuild/serial/BuildWithDockerIgnore 0.6
181 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.64
185 TestJSONOutput/start/Command 81.38
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.48
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.45
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 8.34
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.58
213 TestMainNoArgs 0.08
214 TestMinikubeProfile 85.95
220 TestMultiNode/serial/FreshStart2Nodes 110.1
221 TestMultiNode/serial/DeployApp2Nodes 5.77
222 TestMultiNode/serial/PingHostFrom2Pods 0.89
223 TestMultiNode/serial/AddNode 48.55
224 TestMultiNode/serial/MultiNodeLabels 0.05
225 TestMultiNode/serial/ProfileList 0.18
226 TestMultiNode/serial/CopyFile 5.29
227 TestMultiNode/serial/StopNode 2.85
228 TestMultiNode/serial/StartAfterStop 36.69
229 TestMultiNode/serial/RestartKeepsNodes 180.32
230 TestMultiNode/serial/DeleteNode 3.33
231 TestMultiNode/serial/StopMultiNode 16.84
232 TestMultiNode/serial/RestartMultiNode 108.28
233 TestMultiNode/serial/ValidateNameConflict 43.19
237 TestPreload 135.06
240 TestSkaffold 113.09
243 TestRunningBinaryUpgrade 86.94
245 TestKubernetesUpgrade 1324.27
258 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 3.04
259 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 6.67
260 TestStoppedBinaryUpgrade/Setup 1
261 TestStoppedBinaryUpgrade/Upgrade 121.68
264 TestStoppedBinaryUpgrade/MinikubeLogs 2.99
273 TestNoKubernetes/serial/StartNoK8sWithVersion 0.44
274 TestNoKubernetes/serial/StartWithK8s 70.69
276 TestNoKubernetes/serial/StartWithStopK8s 17.47
277 TestNoKubernetes/serial/Start 19.5
278 TestNoKubernetes/serial/VerifyK8sNotRunning 0.16
279 TestNoKubernetes/serial/ProfileList 0.5
280 TestNoKubernetes/serial/Stop 2.37
281 TestNoKubernetes/serial/StartNoArgs 19.16
284 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.13

TestDownloadOnly/v1.20.0/json-events (20.38s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-883000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-883000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit : (20.377297844s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (20.38s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-883000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-883000: exit status 85 (291.857965ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-883000 | jenkins | v1.33.1 | 16 Aug 24 09:47 PDT |          |
	|         | -p download-only-883000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 09:47:37
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 09:47:37.491194    1833 out.go:345] Setting OutFile to fd 1 ...
	I0816 09:47:37.491381    1833 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 09:47:37.491386    1833 out.go:358] Setting ErrFile to fd 2...
	I0816 09:47:37.491390    1833 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 09:47:37.491591    1833 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1276/.minikube/bin
	W0816 09:47:37.491690    1833 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19461-1276/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19461-1276/.minikube/config/config.json: no such file or directory
	I0816 09:47:37.493509    1833 out.go:352] Setting JSON to true
	I0816 09:47:37.517368    1833 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1027,"bootTime":1723825830,"procs":436,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0816 09:47:37.517463    1833 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 09:47:37.539236    1833 out.go:97] [download-only-883000] minikube v1.33.1 on Darwin 14.6.1
	I0816 09:47:37.539424    1833 notify.go:220] Checking for updates...
	W0816 09:47:37.539445    1833 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball: no such file or directory
	I0816 09:47:37.560219    1833 out.go:169] MINIKUBE_LOCATION=19461
	I0816 09:47:37.581143    1833 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 09:47:37.603141    1833 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0816 09:47:37.624251    1833 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 09:47:37.645293    1833 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1276/.minikube
	W0816 09:47:37.688194    1833 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0816 09:47:37.688557    1833 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 09:47:37.736255    1833 out.go:97] Using the hyperkit driver based on user configuration
	I0816 09:47:37.736312    1833 start.go:297] selected driver: hyperkit
	I0816 09:47:37.736330    1833 start.go:901] validating driver "hyperkit" against <nil>
	I0816 09:47:37.736573    1833 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 09:47:37.736936    1833 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19461-1276/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0816 09:47:38.141238    1833 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0816 09:47:38.146101    1833 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 09:47:38.146129    1833 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0816 09:47:38.146159    1833 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 09:47:38.150747    1833 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0816 09:47:38.151409    1833 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0816 09:47:38.151440    1833 cni.go:84] Creating CNI manager for ""
	I0816 09:47:38.151456    1833 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0816 09:47:38.151534    1833 start.go:340] cluster config:
	{Name:download-only-883000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-883000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 09:47:38.151759    1833 iso.go:125] acquiring lock: {Name:mkb430b43b5e7a8a683454482519837ed996c78d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 09:47:38.173062    1833 out.go:97] Downloading VM boot image ...
	I0816 09:47:38.173133    1833 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso.sha256 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0816 09:47:48.699015    1833 out.go:97] Starting "download-only-883000" primary control-plane node in "download-only-883000" cluster
	I0816 09:47:48.699063    1833 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0816 09:47:48.765308    1833 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0816 09:47:48.765331    1833 cache.go:56] Caching tarball of preloaded images
	I0816 09:47:48.766345    1833 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0816 09:47:48.787591    1833 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0816 09:47:48.787600    1833 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0816 09:47:48.872613    1833 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0816 09:47:55.969032    1833 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0816 09:47:55.969448    1833 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0816 09:47:56.520308    1833 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0816 09:47:56.520560    1833 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/download-only-883000/config.json ...
	I0816 09:47:56.520585    1833 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/download-only-883000/config.json: {Name:mkc528dca702b7ccdca87569c6a88a14e9888168 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 09:47:56.520884    1833 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0816 09:47:56.521190    1833 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/darwin/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-883000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-883000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.29s)

TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-883000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

TestDownloadOnly/v1.31.0/json-events (6.78s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-178000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-178000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=hyperkit : (6.774885478s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (6.78s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-178000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-178000: exit status 85 (293.51713ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-883000 | jenkins | v1.33.1 | 16 Aug 24 09:47 PDT |                     |
	|         | -p download-only-883000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 16 Aug 24 09:47 PDT | 16 Aug 24 09:47 PDT |
	| delete  | -p download-only-883000        | download-only-883000 | jenkins | v1.33.1 | 16 Aug 24 09:47 PDT | 16 Aug 24 09:47 PDT |
	| start   | -o=json --download-only        | download-only-178000 | jenkins | v1.33.1 | 16 Aug 24 09:47 PDT |                     |
	|         | -p download-only-178000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 09:47:58
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 09:47:58.606058    1866 out.go:345] Setting OutFile to fd 1 ...
	I0816 09:47:58.606334    1866 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 09:47:58.606339    1866 out.go:358] Setting ErrFile to fd 2...
	I0816 09:47:58.606343    1866 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 09:47:58.606513    1866 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1276/.minikube/bin
	I0816 09:47:58.607914    1866 out.go:352] Setting JSON to true
	I0816 09:47:58.632801    1866 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1048,"bootTime":1723825830,"procs":435,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0816 09:47:58.632896    1866 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 09:47:58.654037    1866 out.go:97] [download-only-178000] minikube v1.33.1 on Darwin 14.6.1
	I0816 09:47:58.654184    1866 notify.go:220] Checking for updates...
	I0816 09:47:58.675063    1866 out.go:169] MINIKUBE_LOCATION=19461
	I0816 09:47:58.696236    1866 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 09:47:58.717072    1866 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0816 09:47:58.738422    1866 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 09:47:58.759093    1866 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1276/.minikube
	W0816 09:47:58.801180    1866 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0816 09:47:58.801631    1866 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 09:47:58.831895    1866 out.go:97] Using the hyperkit driver based on user configuration
	I0816 09:47:58.831937    1866 start.go:297] selected driver: hyperkit
	I0816 09:47:58.831949    1866 start.go:901] validating driver "hyperkit" against <nil>
	I0816 09:47:58.832142    1866 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 09:47:58.832358    1866 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19461-1276/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0816 09:47:58.841970    1866 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0816 09:47:58.846082    1866 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 09:47:58.846109    1866 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0816 09:47:58.846141    1866 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 09:47:58.849029    1866 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0816 09:47:58.849187    1866 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0816 09:47:58.849219    1866 cni.go:84] Creating CNI manager for ""
	I0816 09:47:58.849232    1866 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 09:47:58.849241    1866 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 09:47:58.849312    1866 start.go:340] cluster config:
	{Name:download-only-178000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-178000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 09:47:58.849408    1866 iso.go:125] acquiring lock: {Name:mkb430b43b5e7a8a683454482519837ed996c78d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 09:47:58.869968    1866 out.go:97] Starting "download-only-178000" primary control-plane node in "download-only-178000" cluster
	I0816 09:47:58.870011    1866 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 09:47:58.929074    1866 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0816 09:47:58.929124    1866 cache.go:56] Caching tarball of preloaded images
	I0816 09:47:58.929430    1866 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 09:47:58.950131    1866 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0816 09:47:58.950142    1866 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 ...
	I0816 09:47:59.035322    1866 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4?checksum=md5:2dd98f97b896d7a4f012ee403b477cc8 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0816 09:48:03.057695    1866 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 ...
	I0816 09:48:03.057891    1866 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 ...
	I0816 09:48:03.526787    1866 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 09:48:03.527030    1866 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/download-only-178000/config.json ...
	I0816 09:48:03.527056    1866 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/download-only-178000/config.json: {Name:mkfe3ff2712a0a2639c5198ab92b2f61925f3dd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 09:48:03.527327    1866 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 09:48:03.527541    1866 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19461-1276/.minikube/cache/darwin/amd64/v1.31.0/kubectl
	
	
	* The control-plane node download-only-178000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-178000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.29s)

TestDownloadOnly/v1.31.0/DeleteAll (0.25s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.25s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-178000
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.21s)

TestBinaryMirror (0.97s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-772000 --alsologtostderr --binary-mirror http://127.0.0.1:49632 --driver=hyperkit 
helpers_test.go:175: Cleaning up "binary-mirror-772000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-772000
--- PASS: TestBinaryMirror (0.97s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.21s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-725000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-725000: exit status 85 (207.317538ms)

-- stdout --
	* Profile "addons-725000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-725000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.21s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-725000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-725000: exit status 85 (187.628838ms)

-- stdout --
	* Profile "addons-725000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-725000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

TestAddons/Setup (205.09s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-725000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-darwin-amd64 start -p addons-725000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m25.093577653s)
--- PASS: TestAddons/Setup (205.09s)

TestAddons/serial/Volcano (40.8s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 12.698678ms
addons_test.go:897: volcano-scheduler stabilized in 12.73064ms
addons_test.go:905: volcano-admission stabilized in 12.84433ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-24rhs" [2d7ef839-5719-4fba-a44a-1ea35547b663] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.004244898s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-sbfsp" [3880e939-616a-4599-a4fa-1ff441dbb0a0] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004540579s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-t4fpv" [4c4d081a-fc10-448f-8152-28b4918d7a84] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004746387s
addons_test.go:932: (dbg) Run:  kubectl --context addons-725000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-725000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-725000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [0c7c990f-8b82-4cab-90ab-22c134d8eb8d] Pending
helpers_test.go:344: "test-job-nginx-0" [0c7c990f-8b82-4cab-90ab-22c134d8eb8d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [0c7c990f-8b82-4cab-90ab-22c134d8eb8d] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 15.003416243s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-amd64 -p addons-725000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-amd64 -p addons-725000 addons disable volcano --alsologtostderr -v=1: (10.505975316s)
--- PASS: TestAddons/serial/Volcano (40.80s)

TestAddons/serial/GCPAuth/Namespaces (0.1s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-725000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-725000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

TestAddons/parallel/Registry (14.47s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.308755ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-52kd6" [5df1080c-5437-4abe-a0b3-1f91e71a544c] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004598296s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-hnbnk" [67bab68e-4797-4929-98cc-9be3798605a5] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004059621s
addons_test.go:342: (dbg) Run:  kubectl --context addons-725000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-725000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-725000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.829187577s)
addons_test.go:361: (dbg) Run:  out/minikube-darwin-amd64 -p addons-725000 ip
2024/08/16 09:52:44 [DEBUG] GET http://192.169.0.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 -p addons-725000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.47s)

TestAddons/parallel/Ingress (20.15s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-725000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-725000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-725000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [984d4b19-cdc6-4ad4-9115-1ab4f0328348] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [984d4b19-cdc6-4ad4-9115-1ab4f0328348] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.005511734s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 -p addons-725000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-725000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 -p addons-725000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.169.0.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-amd64 -p addons-725000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 -p addons-725000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-amd64 -p addons-725000 addons disable ingress --alsologtostderr -v=1: (7.457727282s)
--- PASS: TestAddons/parallel/Ingress (20.15s)

TestAddons/parallel/InspektorGadget (11.49s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-rpklv" [38bb05a9-e211-43eb-b06a-5fe5f6adb8e9] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003458266s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-725000
addons_test.go:851: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-725000: (5.490004698s)
--- PASS: TestAddons/parallel/InspektorGadget (11.49s)

TestAddons/parallel/MetricsServer (5.5s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.150954ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-2nvtb" [051c6988-4ca7-483c-a4c1-4fcc50c5f879] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004109044s
addons_test.go:417: (dbg) Run:  kubectl --context addons-725000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-amd64 -p addons-725000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.50s)

TestAddons/parallel/HelmTiller (9.68s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.062476ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-xcths" [78b422b3-4552-4fbe-a61c-55fd16a99726] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.004613479s
addons_test.go:475: (dbg) Run:  kubectl --context addons-725000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-725000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.228656354s)
addons_test.go:492: (dbg) Run:  out/minikube-darwin-amd64 -p addons-725000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.68s)

TestAddons/parallel/CSI (50.89s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 3.364432ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-725000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-725000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-725000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-725000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-725000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-725000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-725000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-725000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-725000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-725000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-725000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-725000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [c0528c42-db88-4068-ac9b-0109bb7d2638] Pending
helpers_test.go:344: "task-pv-pod" [c0528c42-db88-4068-ac9b-0109bb7d2638] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [c0528c42-db88-4068-ac9b-0109bb7d2638] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.004733719s
addons_test.go:590: (dbg) Run:  kubectl --context addons-725000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-725000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-725000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-725000 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-725000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-725000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-725000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-725000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-725000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-725000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-725000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-725000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-725000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-725000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-725000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-725000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-725000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-725000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-725000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-725000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-725000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-725000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-725000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-725000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [f4ae2be8-d304-4c0d-a98c-7c33b947f90c] Pending
helpers_test.go:344: "task-pv-pod-restore" [f4ae2be8-d304-4c0d-a98c-7c33b947f90c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [f4ae2be8-d304-4c0d-a98c-7c33b947f90c] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003471516s
addons_test.go:632: (dbg) Run:  kubectl --context addons-725000 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-725000 delete pod task-pv-pod-restore: (1.186425498s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-725000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-725000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-amd64 -p addons-725000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-amd64 -p addons-725000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.432888881s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-amd64 -p addons-725000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (50.89s)

TestAddons/parallel/Headlamp (17.42s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-725000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-x2bx4" [ce670249-bdc3-4f85-be9d-9052aed11527] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-x2bx4" [ce670249-bdc3-4f85-be9d-9052aed11527] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.005037108s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-amd64 -p addons-725000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-amd64 -p addons-725000 addons disable headlamp --alsologtostderr -v=1: (5.471012057s)
--- PASS: TestAddons/parallel/Headlamp (17.42s)

TestAddons/parallel/CloudSpanner (5.38s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-c4bc9b5f8-zrzjp" [2ae9537e-4486-4639-9ad5-e861e48cbf22] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003783375s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-725000
--- PASS: TestAddons/parallel/CloudSpanner (5.38s)

TestAddons/parallel/LocalPath (53.38s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-725000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-725000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-725000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-725000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-725000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-725000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-725000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-725000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [a7c4580f-dbdc-4179-a6c0-e4c6f0d64840] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [a7c4580f-dbdc-4179-a6c0-e4c6f0d64840] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [a7c4580f-dbdc-4179-a6c0-e4c6f0d64840] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004658638s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-725000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-amd64 -p addons-725000 ssh "cat /opt/local-path-provisioner/pvc-e92d408d-493e-4922-87a5-140c0f0b67fc_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-725000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-725000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-amd64 -p addons-725000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-amd64 -p addons-725000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.745567883s)
--- PASS: TestAddons/parallel/LocalPath (53.38s)
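The local-path check provisions a hostPath-backed volume from a PVC, appears to write file1 through the busybox pod, then reads it back over SSH. A condensed sketch (the pvc-... directory name is assigned by the provisioner at run time, so the path below is illustrative, not fixed):

  kubectl --context addons-725000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
  kubectl --context addons-725000 apply -f testdata/storage-provisioner-rancher/pod.yaml
  kubectl --context addons-725000 get pvc test-pvc -o jsonpath={.status.phase}
  out/minikube-darwin-amd64 -p addons-725000 ssh "cat /opt/local-path-provisioner/<pv-name>_default_test-pvc/file1"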

TestAddons/parallel/NvidiaDevicePlugin (5.36s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-s65fs" [4994fd38-c1b9-4c5a-b8b3-bc2748a26036] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003800594s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-725000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.36s)

TestAddons/parallel/Yakd (10.46s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-f7pzr" [4037b902-717a-4266-8c1d-66e3e03a1fae] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.007041673s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-amd64 -p addons-725000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-amd64 -p addons-725000 addons disable yakd --alsologtostderr -v=1: (5.456238736s)
--- PASS: TestAddons/parallel/Yakd (10.46s)

TestAddons/StoppedEnableDisable (5.93s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-725000
addons_test.go:174: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-725000: (5.390213396s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-725000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-725000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-725000
--- PASS: TestAddons/StoppedEnableDisable (5.93s)
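This case verifies that addons can be toggled while the cluster is stopped, i.e. the sequence below has to succeed without a running VM:

  out/minikube-darwin-amd64 stop -p addons-725000
  out/minikube-darwin-amd64 addons enable dashboard -p addons-725000
  out/minikube-darwin-amd64 addons disable dashboard -p addons-725000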

TestHyperKitDriverInstallOrUpdate (8.69s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (8.69s)

TestErrorSpam/setup (34.5s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-965000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-965000 --driver=hyperkit 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-965000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-965000 --driver=hyperkit : (34.498387678s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.0."
--- PASS: TestErrorSpam/setup (34.50s)

TestErrorSpam/start (1.74s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-965000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-965000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-965000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-965000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-965000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-965000 start --dry-run
--- PASS: TestErrorSpam/start (1.74s)

TestErrorSpam/status (0.53s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-965000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-965000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-965000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-965000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-965000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-965000 status
--- PASS: TestErrorSpam/status (0.53s)

TestErrorSpam/pause (1.34s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-965000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-965000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-965000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-965000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-965000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-965000 pause
--- PASS: TestErrorSpam/pause (1.34s)

TestErrorSpam/unpause (1.44s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-965000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-965000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-965000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-965000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-965000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-965000 unpause
--- PASS: TestErrorSpam/unpause (1.44s)

TestErrorSpam/stop (153.86s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-965000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-965000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-965000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-965000 stop: (3.401222558s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-965000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-965000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-965000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-965000 stop: (1m15.22956141s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-965000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-965000 stop
E0816 09:56:32.757476    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/addons-725000/client.crt: no such file or directory" logger="UnhandledError"
E0816 09:56:32.766659    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/addons-725000/client.crt: no such file or directory" logger="UnhandledError"
E0816 09:56:32.780187    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/addons-725000/client.crt: no such file or directory" logger="UnhandledError"
E0816 09:56:32.803336    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/addons-725000/client.crt: no such file or directory" logger="UnhandledError"
E0816 09:56:32.846074    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/addons-725000/client.crt: no such file or directory" logger="UnhandledError"
E0816 09:56:32.928722    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/addons-725000/client.crt: no such file or directory" logger="UnhandledError"
E0816 09:56:33.092199    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/addons-725000/client.crt: no such file or directory" logger="UnhandledError"
E0816 09:56:33.415563    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/addons-725000/client.crt: no such file or directory" logger="UnhandledError"
E0816 09:56:34.058210    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/addons-725000/client.crt: no such file or directory" logger="UnhandledError"
E0816 09:56:35.340781    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/addons-725000/client.crt: no such file or directory" logger="UnhandledError"
E0816 09:56:37.904094    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/addons-725000/client.crt: no such file or directory" logger="UnhandledError"
E0816 09:56:43.026657    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/addons-725000/client.crt: no such file or directory" logger="UnhandledError"
E0816 09:56:53.268212    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/addons-725000/client.crt: no such file or directory" logger="UnhandledError"
E0816 09:57:13.750174    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/addons-725000/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-amd64 -p nospam-965000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-965000 stop: (1m15.224270418s)
--- PASS: TestErrorSpam/stop (153.86s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19461-1276/.minikube/files/etc/test/nested/copy/1831/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (74.65s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-373000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit 
E0816 09:57:54.711020    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/addons-725000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-darwin-amd64 start -p functional-373000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit : (1m14.644482824s)
--- PASS: TestFunctional/serial/StartWithProxy (74.65s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (37.22s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-373000 --alsologtostderr -v=8
E0816 09:59:16.632138    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/addons-725000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-darwin-amd64 start -p functional-373000 --alsologtostderr -v=8: (37.216627892s)
functional_test.go:663: soft start took 37.21717699s for "functional-373000" cluster.
--- PASS: TestFunctional/serial/SoftStart (37.22s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.05s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-373000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-amd64 -p functional-373000 cache add registry.k8s.io/pause:3.1: (1.179895644s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-darwin-amd64 -p functional-373000 cache add registry.k8s.io/pause:3.3: (1.064551372s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.19s)

TestFunctional/serial/CacheCmd/cache/add_local (1.36s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-373000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialCacheCmdcacheadd_local3279595217/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 cache add minikube-local-cache-test:functional-373000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 cache delete minikube-local-cache-test:functional-373000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-373000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.36s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-373000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (142.652497ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.09s)
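The reload sequence being exercised: remove a cached image inside the VM, confirm it is gone, then let `cache reload` push it back from the host-side cache. Manually, with the same commands the test runs:

  out/minikube-darwin-amd64 -p functional-373000 ssh sudo docker rmi registry.k8s.io/pause:latest
  out/minikube-darwin-amd64 -p functional-373000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: image is gone
  out/minikube-darwin-amd64 -p functional-373000 cache reload
  out/minikube-darwin-amd64 -p functional-373000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again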

TestFunctional/serial/CacheCmd/cache/delete (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)

TestFunctional/serial/MinikubeKubectlCmd (1.22s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 kubectl -- --context functional-373000 get pods
functional_test.go:716: (dbg) Done: out/minikube-darwin-amd64 -p functional-373000 kubectl -- --context functional-373000 get pods: (1.215393381s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.22s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.56s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-373000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-373000 get pods: (1.558452887s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.56s)

TestFunctional/serial/ExtraConfig (41.11s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-373000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-darwin-amd64 start -p functional-373000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.108573568s)
functional_test.go:761: restart took 41.10871244s for "functional-373000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (41.11s)
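`--extra-config` threads a component flag through to the cluster configuration in component.key=value form; here it switches the apiserver's admission plugins and restarts the cluster in place, as in:

  out/minikube-darwin-amd64 start -p functional-373000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all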

TestFunctional/serial/ComponentHealth (0.05s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-373000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

TestFunctional/serial/LogsCmd (3.01s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 logs
functional_test.go:1236: (dbg) Done: out/minikube-darwin-amd64 -p functional-373000 logs: (3.00749576s)
--- PASS: TestFunctional/serial/LogsCmd (3.01s)

TestFunctional/serial/LogsFileCmd (2.67s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd3881159607/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-darwin-amd64 -p functional-373000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd3881159607/001/logs.txt: (2.665014593s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.67s)
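Same logs, two sinks: plain `logs` writes to stdout, `logs --file` writes to the given path, which is the form minikube's own failure-reporting hints ask for (the target path below is illustrative):

  out/minikube-darwin-amd64 -p functional-373000 logs
  out/minikube-darwin-amd64 -p functional-373000 logs --file /tmp/logs.txt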

TestFunctional/serial/InvalidService (4.64s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-373000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-373000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-373000: exit status 115 (261.633746ms)

-- stdout --
	|-----------|-------------|-------------|--------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL            |
	|-----------|-------------|-------------|--------------------------|
	| default   | invalid-svc |          80 | http://192.169.0.4:32680 |
	|-----------|-------------|-------------|--------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-373000 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-373000 delete -f testdata/invalidsvc.yaml: (1.248140592s)
--- PASS: TestFunctional/serial/InvalidService (4.64s)
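The negative case: a Service whose selector matches no running pod. `minikube service` still resolves the NodePort URL, but exits 115 with SVC_UNREACHABLE, which is what the test asserts:

  kubectl --context functional-373000 apply -f testdata/invalidsvc.yaml
  out/minikube-darwin-amd64 service invalid-svc -p functional-373000   # exit status 115
  kubectl --context functional-373000 delete -f testdata/invalidsvc.yaml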

TestFunctional/parallel/ConfigCmd (0.5s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-373000 config get cpus: exit status 14 (70.202773ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-373000 config get cpus: exit status 14 (56.267502ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)
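The config subcommands round-trip a key: `get` on an unset key exits 14 ("specified key could not be found in config"), `set`/`get` succeed, and `unset` returns it to the error state:

  out/minikube-darwin-amd64 -p functional-373000 config set cpus 2
  out/minikube-darwin-amd64 -p functional-373000 config get cpus      # prints the stored value
  out/minikube-darwin-amd64 -p functional-373000 config unset cpus
  out/minikube-darwin-amd64 -p functional-373000 config get cpus      # exit status 14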

TestFunctional/parallel/DashboardCmd (11.24s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-373000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-373000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 3639: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.24s)
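The dashboard is launched as a background daemon with a pinned proxy port and then torn down; the "unable to kill pid" message above just means the process had already exited by cleanup time. Interactively this is:

  out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-373000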

TestFunctional/parallel/DryRun (0.97s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-373000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-373000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (494.377754ms)

-- stdout --
	* [functional-373000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile

-- /stdout --
** stderr ** 
	I0816 10:01:29.967722    3577 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:01:29.967888    3577 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:01:29.967894    3577 out.go:358] Setting ErrFile to fd 2...
	I0816 10:01:29.967898    3577 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:01:29.968067    3577 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1276/.minikube/bin
	I0816 10:01:29.969412    3577 out.go:352] Setting JSON to false
	I0816 10:01:29.992015    3577 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1859,"bootTime":1723825830,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0816 10:01:29.992116    3577 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:01:30.014117    3577 out.go:177] * [functional-373000] minikube v1.33.1 on Darwin 14.6.1
	I0816 10:01:30.056107    3577 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:01:30.056132    3577 notify.go:220] Checking for updates...
	I0816 10:01:30.098188    3577 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:01:30.119070    3577 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0816 10:01:30.142078    3577 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:01:30.162995    3577 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:01:30.184259    3577 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:01:30.205888    3577 config.go:182] Loaded profile config "functional-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:01:30.206566    3577 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:01:30.206647    3577 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:01:30.216427    3577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50782
	I0816 10:01:30.216805    3577 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:01:30.217215    3577 main.go:141] libmachine: Using API Version  1
	I0816 10:01:30.217227    3577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:01:30.217498    3577 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:01:30.217622    3577 main.go:141] libmachine: (functional-373000) Calling .DriverName
	I0816 10:01:30.217814    3577 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:01:30.218061    3577 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:01:30.218091    3577 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:01:30.226595    3577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50784
	I0816 10:01:30.226979    3577 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:01:30.227326    3577 main.go:141] libmachine: Using API Version  1
	I0816 10:01:30.227342    3577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:01:30.227587    3577 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:01:30.227703    3577 main.go:141] libmachine: (functional-373000) Calling .DriverName
	I0816 10:01:30.256262    3577 out.go:177] * Using the hyperkit driver based on existing profile
	I0816 10:01:30.297835    3577 start.go:297] selected driver: hyperkit
	I0816 10:01:30.297863    3577 start.go:901] validating driver "hyperkit" against &{Name:functional-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-373000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:01:30.298056    3577 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:01:30.323029    3577 out.go:201] 
	W0816 10:01:30.344067    3577 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0816 10:01:30.364896    3577 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-373000 --dry-run --alsologtostderr -v=1 --driver=hyperkit 
--- PASS: TestFunctional/parallel/DryRun (0.97s)
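`--dry-run` runs the full validation pass without creating or mutating the VM, so it is a cheap way to reproduce the memory check that fails above (250MB is below the 1800MB usable minimum):

  out/minikube-darwin-amd64 start -p functional-373000 --dry-run --memory 250MB --driver=hyperkit   # exit status 23, RSRC_INSUFFICIENT_REQ_MEMORY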

TestFunctional/parallel/InternationalLanguage (0.61s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-373000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-373000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (610.999097ms)

-- stdout --
	* [functional-373000] minikube v1.33.1 sur Darwin 14.6.1
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote hyperkit basé sur le profil existant

-- /stdout --
** stderr ** 
	I0816 10:01:30.928374    3596 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:01:30.928530    3596 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:01:30.928536    3596 out.go:358] Setting ErrFile to fd 2...
	I0816 10:01:30.928539    3596 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:01:30.928737    3596 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1276/.minikube/bin
	I0816 10:01:30.930187    3596 out.go:352] Setting JSON to false
	I0816 10:01:30.953500    3596 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1860,"bootTime":1723825830,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0816 10:01:30.953585    3596 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:01:30.974545    3596 out.go:177] * [functional-373000] minikube v1.33.1 sur Darwin 14.6.1
	I0816 10:01:31.016823    3596 notify.go:220] Checking for updates...
	I0816 10:01:31.037587    3596 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:01:31.079470    3596 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1276/kubeconfig
	I0816 10:01:31.121490    3596 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0816 10:01:31.163691    3596 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:01:31.205558    3596 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1276/.minikube
	I0816 10:01:31.268414    3596 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:01:31.291415    3596 config.go:182] Loaded profile config "functional-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:01:31.292086    3596 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:01:31.292168    3596 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:01:31.302519    3596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50801
	I0816 10:01:31.302919    3596 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:01:31.303363    3596 main.go:141] libmachine: Using API Version  1
	I0816 10:01:31.303389    3596 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:01:31.303671    3596 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:01:31.303827    3596 main.go:141] libmachine: (functional-373000) Calling .DriverName
	I0816 10:01:31.304049    3596 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:01:31.304312    3596 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:01:31.304335    3596 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:01:31.312876    3596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50804
	I0816 10:01:31.313265    3596 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:01:31.313651    3596 main.go:141] libmachine: Using API Version  1
	I0816 10:01:31.313679    3596 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:01:31.313915    3596 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:01:31.314050    3596 main.go:141] libmachine: (functional-373000) Calling .DriverName
	I0816 10:01:31.342492    3596 out.go:177] * Utilisation du pilote hyperkit basé sur le profil existant
	I0816 10:01:31.363457    3596 start.go:297] selected driver: hyperkit
	I0816 10:01:31.363475    3596 start.go:901] validating driver "hyperkit" against &{Name:functional-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-373000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:01:31.363582    3596 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:01:31.406531    3596 out.go:201] 
	W0816 10:01:31.427586    3596 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0816 10:01:31.448522    3596 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.61s)
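The French output above is the point of this test: minikube localizes its messages from the detected process locale, so the log is left verbatim as evidence. Presumably the harness sets the locale environment for this subtest; a manual equivalent would be something like:

  LC_ALL=fr_FR.UTF-8 out/minikube-darwin-amd64 start -p functional-373000 --dry-run --memory 250MB --driver=hyperkit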

TestFunctional/parallel/StatusCmd (0.5s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.50s)

TestFunctional/parallel/ServiceCmdConnect (8.61s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-373000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-373000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-ktvks" [5b0d3007-3686-447d-af45-6101b1c2d000] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-ktvks" [5b0d3007-3686-447d-af45-6101b1c2d000] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003956196s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.169.0.4:31382
functional_test.go:1675: http://192.169.0.4:31382: success! body:

Hostname: hello-node-connect-67bdd5bbb4-ktvks

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.169.0.4:8080/

Request Headers:
	accept-encoding=gzip
	host=192.169.0.4:31382
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.61s)
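End-to-end NodePort plumbing: create a deployment, expose it, then let minikube resolve the node URL and fetch it. The echoserver response above confirms the request reached the pod:

  kubectl --context functional-373000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
  kubectl --context functional-373000 expose deployment hello-node-connect --type=NodePort --port=8080
  out/minikube-darwin-amd64 -p functional-373000 service hello-node-connect --url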

TestFunctional/parallel/AddonsCmd (0.23s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.23s)

TestFunctional/parallel/PersistentVolumeClaim (26.36s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [0e44b33a-b227-46e0-a67e-3b06b281f659] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005063262s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-373000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-373000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-373000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-373000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9132efa1-0620-40c3-9dc1-6fa191b9976e] Pending
helpers_test.go:344: "sp-pod" [9132efa1-0620-40c3-9dc1-6fa191b9976e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [9132efa1-0620-40c3-9dc1-6fa191b9976e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004665572s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-373000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-373000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-373000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e386adb6-f8da-4acb-90d9-fa8e5bdc5c9f] Pending
helpers_test.go:344: "sp-pod" [e386adb6-f8da-4acb-90d9-fa8e5bdc5c9f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e386adb6-f8da-4acb-90d9-fa8e5bdc5c9f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003282007s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-373000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.36s)
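
The waits above ("waiting 3m0s for pods matching ...") follow the usual label-selector polling pattern. Here is a minimal client-go sketch of it; this is my own code under assumed defaults (kubeconfig location, poll interval), not the test's helpers_test.go.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config and build a clientset.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for i := 0; i < 60; i++ { // poll for up to ~3 minutes, as the test does
		pods, err := cs.CoreV1().Pods("default").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "test=storage-provisioner"})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Println("pod running:", p.Name)
					return
				}
			}
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for pod")
}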

TestFunctional/parallel/SSHCmd (0.28s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.28s)

TestFunctional/parallel/CpCmd (0.98s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 ssh -n functional-373000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 cp functional-373000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelCpCmd4061105086/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 ssh -n functional-373000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 ssh -n functional-373000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.98s)

TestFunctional/parallel/MySQL (26.95s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-373000 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-jtjwm" [c4a47857-dc30-4e02-8ca7-e2bba392afb2] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-jtjwm" [c4a47857-dc30-4e02-8ca7-e2bba392afb2] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.003443783s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-373000 exec mysql-6cdb49bbb-jtjwm -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-373000 exec mysql-6cdb49bbb-jtjwm -- mysql -ppassword -e "show databases;": exit status 1 (124.543633ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-373000 exec mysql-6cdb49bbb-jtjwm -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-373000 exec mysql-6cdb49bbb-jtjwm -- mysql -ppassword -e "show databases;": exit status 1 (117.253685ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-373000 exec mysql-6cdb49bbb-jtjwm -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.95s)
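
The two non-zero exits above are expected warm-up noise: ERROR 2002 just means mysqld has not created its socket yet, so the test re-runs the query until it succeeds. A minimal sketch of that retry loop (the attempt count and sleep are assumptions, not the values in functional_test.go):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Command taken from the log above.
	args := []string{"--context", "functional-373000", "exec",
		"mysql-6cdb49bbb-jtjwm", "--", "mysql", "-ppassword", "-e", "show databases;"}
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Print(string(out)) // mysqld accepted the connection
			return
		}
		// ERROR 2002 means mysqld is still starting; wait and retry.
		time.Sleep(2 * time.Second)
	}
	fmt.Println("mysql never became reachable")
}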

TestFunctional/parallel/FileSync (0.15s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1831/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 ssh "sudo cat /etc/test/nested/copy/1831/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.15s)

TestFunctional/parallel/CertSync (0.99s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1831.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 ssh "sudo cat /etc/ssl/certs/1831.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1831.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 ssh "sudo cat /usr/share/ca-certificates/1831.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/18312.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 ssh "sudo cat /etc/ssl/certs/18312.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/18312.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 ssh "sudo cat /usr/share/ca-certificates/18312.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.99s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-373000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.17s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-373000 ssh "sudo systemctl is-active crio": exit status 1 (166.30403ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.17s)
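
The exit status above is the pass condition: with docker as the active runtime, `systemctl is-active crio` prints "inactive" and exits non-zero (systemd uses exit code 3 for inactive units, surfaced here as ssh status 3). A minimal Go sketch of interpreting that exit code, as it would run inside the VM (assumed code, not minikube's):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Output() still returns captured stdout when the command exits non-zero.
	out, err := exec.Command("systemctl", "is-active", "crio").Output()
	state := strings.TrimSpace(string(out))
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("crio is", state) // "active"
	case errors.As(err, &ee) && ee.ExitCode() == 3:
		fmt.Println("crio is", state) // "inactive": the expected result here
	default:
		fmt.Println("could not query crio:", err)
	}
}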

TestFunctional/parallel/License (0.63s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.63s)

TestFunctional/parallel/Version/short (0.10s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (0.40s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.40s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-373000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-373000
docker.io/kicbase/echo-server:functional-373000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-373000 image ls --format short --alsologtostderr:
I0816 10:01:37.162449    3715 out.go:345] Setting OutFile to fd 1 ...
I0816 10:01:37.162779    3715 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 10:01:37.162784    3715 out.go:358] Setting ErrFile to fd 2...
I0816 10:01:37.162788    3715 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 10:01:37.162971    3715 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1276/.minikube/bin
I0816 10:01:37.163551    3715 config.go:182] Loaded profile config "functional-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0816 10:01:37.163651    3715 config.go:182] Loaded profile config "functional-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0816 10:01:37.163995    3715 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0816 10:01:37.164047    3715 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0816 10:01:37.172707    3715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50959
I0816 10:01:37.173140    3715 main.go:141] libmachine: () Calling .GetVersion
I0816 10:01:37.173567    3715 main.go:141] libmachine: Using API Version  1
I0816 10:01:37.173612    3715 main.go:141] libmachine: () Calling .SetConfigRaw
I0816 10:01:37.173859    3715 main.go:141] libmachine: () Calling .GetMachineName
I0816 10:01:37.173996    3715 main.go:141] libmachine: (functional-373000) Calling .GetState
I0816 10:01:37.174079    3715 main.go:141] libmachine: (functional-373000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0816 10:01:37.174158    3715 main.go:141] libmachine: (functional-373000) DBG | hyperkit pid from json: 2596
I0816 10:01:37.175521    3715 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0816 10:01:37.175546    3715 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0816 10:01:37.183984    3715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50961
I0816 10:01:37.184332    3715 main.go:141] libmachine: () Calling .GetVersion
I0816 10:01:37.184653    3715 main.go:141] libmachine: Using API Version  1
I0816 10:01:37.184662    3715 main.go:141] libmachine: () Calling .SetConfigRaw
I0816 10:01:37.184861    3715 main.go:141] libmachine: () Calling .GetMachineName
I0816 10:01:37.184969    3715 main.go:141] libmachine: (functional-373000) Calling .DriverName
I0816 10:01:37.185133    3715 ssh_runner.go:195] Run: systemctl --version
I0816 10:01:37.185152    3715 main.go:141] libmachine: (functional-373000) Calling .GetSSHHostname
I0816 10:01:37.185232    3715 main.go:141] libmachine: (functional-373000) Calling .GetSSHPort
I0816 10:01:37.185308    3715 main.go:141] libmachine: (functional-373000) Calling .GetSSHKeyPath
I0816 10:01:37.185397    3715 main.go:141] libmachine: (functional-373000) Calling .GetSSHUsername
I0816 10:01:37.185491    3715 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/functional-373000/id_rsa Username:docker}
I0816 10:01:37.215962    3715 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0816 10:01:37.240913    3715 main.go:141] libmachine: Making call to close driver server
I0816 10:01:37.240922    3715 main.go:141] libmachine: (functional-373000) Calling .Close
I0816 10:01:37.241095    3715 main.go:141] libmachine: Successfully made call to close driver server
I0816 10:01:37.241107    3715 main.go:141] libmachine: (functional-373000) DBG | Closing plugin on server side
I0816 10:01:37.241110    3715 main.go:141] libmachine: Making call to close connection to plugin binary
I0816 10:01:37.241118    3715 main.go:141] libmachine: Making call to close driver server
I0816 10:01:37.241124    3715 main.go:141] libmachine: (functional-373000) Calling .Close
I0816 10:01:37.241279    3715 main.go:141] libmachine: Successfully made call to close driver server
I0816 10:01:37.241288    3715 main.go:141] libmachine: Making call to close connection to plugin binary
I0816 10:01:37.241294    3715 main.go:141] libmachine: (functional-373000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.16s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-373000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/nginx                     | alpine            | 0f0eda053dc5c | 43.3MB |
| registry.k8s.io/kube-scheduler              | v1.31.0           | 1766f54c897f0 | 67.4MB |
| registry.k8s.io/kube-controller-manager     | v1.31.0           | 045733566833c | 88.4MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| docker.io/library/minikube-local-cache-test | functional-373000 | 2ce2382d9b754 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.31.0           | 604f5db92eaa8 | 94.2MB |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| docker.io/kicbase/echo-server               | functional-373000 | 9056ab77afb8e | 4.94MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| localhost/my-image                          | functional-373000 | ca34c50f7bd78 | 1.24MB |
| docker.io/library/nginx                     | latest            | 5ef79149e0ec8 | 188MB  |
| registry.k8s.io/kube-proxy                  | v1.31.0           | ad83b2ca7b09e | 91.5MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-373000 image ls --format table --alsologtostderr:
I0816 10:01:40.131250    3743 out.go:345] Setting OutFile to fd 1 ...
I0816 10:01:40.131534    3743 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 10:01:40.131540    3743 out.go:358] Setting ErrFile to fd 2...
I0816 10:01:40.131544    3743 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 10:01:40.131718    3743 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1276/.minikube/bin
I0816 10:01:40.132452    3743 config.go:182] Loaded profile config "functional-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0816 10:01:40.132550    3743 config.go:182] Loaded profile config "functional-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0816 10:01:40.132905    3743 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0816 10:01:40.132956    3743 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0816 10:01:40.141506    3743 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50992
I0816 10:01:40.141931    3743 main.go:141] libmachine: () Calling .GetVersion
I0816 10:01:40.142342    3743 main.go:141] libmachine: Using API Version  1
I0816 10:01:40.142377    3743 main.go:141] libmachine: () Calling .SetConfigRaw
I0816 10:01:40.142638    3743 main.go:141] libmachine: () Calling .GetMachineName
I0816 10:01:40.142767    3743 main.go:141] libmachine: (functional-373000) Calling .GetState
I0816 10:01:40.142883    3743 main.go:141] libmachine: (functional-373000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0816 10:01:40.142945    3743 main.go:141] libmachine: (functional-373000) DBG | hyperkit pid from json: 2596
I0816 10:01:40.144251    3743 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0816 10:01:40.144287    3743 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0816 10:01:40.152731    3743 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50994
I0816 10:01:40.153099    3743 main.go:141] libmachine: () Calling .GetVersion
I0816 10:01:40.153431    3743 main.go:141] libmachine: Using API Version  1
I0816 10:01:40.153442    3743 main.go:141] libmachine: () Calling .SetConfigRaw
I0816 10:01:40.153682    3743 main.go:141] libmachine: () Calling .GetMachineName
I0816 10:01:40.153796    3743 main.go:141] libmachine: (functional-373000) Calling .DriverName
I0816 10:01:40.153958    3743 ssh_runner.go:195] Run: systemctl --version
I0816 10:01:40.153977    3743 main.go:141] libmachine: (functional-373000) Calling .GetSSHHostname
I0816 10:01:40.154054    3743 main.go:141] libmachine: (functional-373000) Calling .GetSSHPort
I0816 10:01:40.154149    3743 main.go:141] libmachine: (functional-373000) Calling .GetSSHKeyPath
I0816 10:01:40.154225    3743 main.go:141] libmachine: (functional-373000) Calling .GetSSHUsername
I0816 10:01:40.154307    3743 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/functional-373000/id_rsa Username:docker}
I0816 10:01:40.182806    3743 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0816 10:01:40.211924    3743 main.go:141] libmachine: Making call to close driver server
I0816 10:01:40.211932    3743 main.go:141] libmachine: (functional-373000) Calling .Close
I0816 10:01:40.212097    3743 main.go:141] libmachine: Successfully made call to close driver server
I0816 10:01:40.212098    3743 main.go:141] libmachine: (functional-373000) DBG | Closing plugin on server side
I0816 10:01:40.212113    3743 main.go:141] libmachine: Making call to close connection to plugin binary
I0816 10:01:40.212123    3743 main.go:141] libmachine: Making call to close driver server
I0816 10:01:40.212127    3743 main.go:141] libmachine: (functional-373000) Calling .Close
I0816 10:01:40.212239    3743 main.go:141] libmachine: Successfully made call to close driver server
I0816 10:01:40.212248    3743 main.go:141] libmachine: Making call to close connection to plugin binary
2024/08/16 10:01:42 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.16s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-373000 image ls --format json --alsologtostderr:
[{"id":"0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43300000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"67400000"},{"id":"ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"91500000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"ca34c50f7bd78d2709c497124f15a91278063e130a6f62f370abe87aa0a0597d","repoDigests":[],"repoTags":["localhost/my-image:functional-373000"],"size":"1240000"},{"id":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"94200000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-373000"],"size":"4940000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"2ce2382d9b7544fc619e77d8c3a670efeeac57c2ee2402afce4c7d1c02f07f3d","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-373000"],"size":"30"},{"id":"5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"88400000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-373000 image ls --format json --alsologtostderr:
I0816 10:01:39.975109    3737 out.go:345] Setting OutFile to fd 1 ...
I0816 10:01:39.975300    3737 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 10:01:39.975306    3737 out.go:358] Setting ErrFile to fd 2...
I0816 10:01:39.975310    3737 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 10:01:39.975496    3737 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1276/.minikube/bin
I0816 10:01:39.976091    3737 config.go:182] Loaded profile config "functional-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0816 10:01:39.976189    3737 config.go:182] Loaded profile config "functional-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0816 10:01:39.976585    3737 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0816 10:01:39.976626    3737 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0816 10:01:39.985089    3737 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50987
I0816 10:01:39.985559    3737 main.go:141] libmachine: () Calling .GetVersion
I0816 10:01:39.986012    3737 main.go:141] libmachine: Using API Version  1
I0816 10:01:39.986021    3737 main.go:141] libmachine: () Calling .SetConfigRaw
I0816 10:01:39.986253    3737 main.go:141] libmachine: () Calling .GetMachineName
I0816 10:01:39.986373    3737 main.go:141] libmachine: (functional-373000) Calling .GetState
I0816 10:01:39.986461    3737 main.go:141] libmachine: (functional-373000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0816 10:01:39.986540    3737 main.go:141] libmachine: (functional-373000) DBG | hyperkit pid from json: 2596
I0816 10:01:39.987925    3737 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0816 10:01:39.987952    3737 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0816 10:01:39.996581    3737 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50989
I0816 10:01:39.996928    3737 main.go:141] libmachine: () Calling .GetVersion
I0816 10:01:39.997274    3737 main.go:141] libmachine: Using API Version  1
I0816 10:01:39.997287    3737 main.go:141] libmachine: () Calling .SetConfigRaw
I0816 10:01:39.997504    3737 main.go:141] libmachine: () Calling .GetMachineName
I0816 10:01:39.997618    3737 main.go:141] libmachine: (functional-373000) Calling .DriverName
I0816 10:01:39.997782    3737 ssh_runner.go:195] Run: systemctl --version
I0816 10:01:39.997801    3737 main.go:141] libmachine: (functional-373000) Calling .GetSSHHostname
I0816 10:01:39.997875    3737 main.go:141] libmachine: (functional-373000) Calling .GetSSHPort
I0816 10:01:39.997957    3737 main.go:141] libmachine: (functional-373000) Calling .GetSSHKeyPath
I0816 10:01:39.998038    3737 main.go:141] libmachine: (functional-373000) Calling .GetSSHUsername
I0816 10:01:39.998143    3737 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/functional-373000/id_rsa Username:docker}
I0816 10:01:40.027002    3737 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0816 10:01:40.051927    3737 main.go:141] libmachine: Making call to close driver server
I0816 10:01:40.051935    3737 main.go:141] libmachine: (functional-373000) Calling .Close
I0816 10:01:40.052092    3737 main.go:141] libmachine: Successfully made call to close driver server
I0816 10:01:40.052103    3737 main.go:141] libmachine: Making call to close connection to plugin binary
I0816 10:01:40.052110    3737 main.go:141] libmachine: Making call to close driver server
I0816 10:01:40.052152    3737 main.go:141] libmachine: (functional-373000) DBG | Closing plugin on server side
I0816 10:01:40.052156    3737 main.go:141] libmachine: (functional-373000) Calling .Close
I0816 10:01:40.052358    3737 main.go:141] libmachine: (functional-373000) DBG | Closing plugin on server side
I0816 10:01:40.052378    3737 main.go:141] libmachine: Successfully made call to close driver server
I0816 10:01:40.052399    3737 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.16s)
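
For reference, the JSON printed above decodes cleanly with a small struct mirroring the four fields each entry carries; the struct and variable names below are my own, and the sample literal is one entry copied from the output.

package main

import (
	"encoding/json"
	"fmt"
)

// imageEntry matches the fields emitted by `minikube image ls --format json`.
type imageEntry struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	raw := `[{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"}]`
	var images []imageEntry
	if err := json.Unmarshal([]byte(raw), &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img.RepoTags, img.Size)
	}
}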

TestFunctional/parallel/ImageCommands/ImageListYaml (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-373000 image ls --format yaml --alsologtostderr:
- id: 0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43300000"
- id: 604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "94200000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "67400000"
- id: 045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "88400000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-373000
size: "4940000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "91500000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 2ce2382d9b7544fc619e77d8c3a670efeeac57c2ee2402afce4c7d1c02f07f3d
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-373000
size: "30"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-373000 image ls --format yaml --alsologtostderr:
I0816 10:01:37.320889    3720 out.go:345] Setting OutFile to fd 1 ...
I0816 10:01:37.321165    3720 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 10:01:37.321171    3720 out.go:358] Setting ErrFile to fd 2...
I0816 10:01:37.321175    3720 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 10:01:37.321349    3720 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1276/.minikube/bin
I0816 10:01:37.321927    3720 config.go:182] Loaded profile config "functional-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0816 10:01:37.322018    3720 config.go:182] Loaded profile config "functional-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0816 10:01:37.322363    3720 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0816 10:01:37.322412    3720 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0816 10:01:37.330818    3720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50964
I0816 10:01:37.331245    3720 main.go:141] libmachine: () Calling .GetVersion
I0816 10:01:37.331718    3720 main.go:141] libmachine: Using API Version  1
I0816 10:01:37.331753    3720 main.go:141] libmachine: () Calling .SetConfigRaw
I0816 10:01:37.332000    3720 main.go:141] libmachine: () Calling .GetMachineName
I0816 10:01:37.332120    3720 main.go:141] libmachine: (functional-373000) Calling .GetState
I0816 10:01:37.332207    3720 main.go:141] libmachine: (functional-373000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0816 10:01:37.332284    3720 main.go:141] libmachine: (functional-373000) DBG | hyperkit pid from json: 2596
I0816 10:01:37.333559    3720 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0816 10:01:37.333582    3720 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0816 10:01:37.341985    3720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50966
I0816 10:01:37.342368    3720 main.go:141] libmachine: () Calling .GetVersion
I0816 10:01:37.342715    3720 main.go:141] libmachine: Using API Version  1
I0816 10:01:37.342728    3720 main.go:141] libmachine: () Calling .SetConfigRaw
I0816 10:01:37.342961    3720 main.go:141] libmachine: () Calling .GetMachineName
I0816 10:01:37.343068    3720 main.go:141] libmachine: (functional-373000) Calling .DriverName
I0816 10:01:37.343222    3720 ssh_runner.go:195] Run: systemctl --version
I0816 10:01:37.343242    3720 main.go:141] libmachine: (functional-373000) Calling .GetSSHHostname
I0816 10:01:37.343313    3720 main.go:141] libmachine: (functional-373000) Calling .GetSSHPort
I0816 10:01:37.343396    3720 main.go:141] libmachine: (functional-373000) Calling .GetSSHKeyPath
I0816 10:01:37.343478    3720 main.go:141] libmachine: (functional-373000) Calling .GetSSHUsername
I0816 10:01:37.343564    3720 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/functional-373000/id_rsa Username:docker}
I0816 10:01:37.373630    3720 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0816 10:01:37.394113    3720 main.go:141] libmachine: Making call to close driver server
I0816 10:01:37.394124    3720 main.go:141] libmachine: (functional-373000) Calling .Close
I0816 10:01:37.394286    3720 main.go:141] libmachine: Successfully made call to close driver server
I0816 10:01:37.394295    3720 main.go:141] libmachine: Making call to close connection to plugin binary
I0816 10:01:37.394301    3720 main.go:141] libmachine: Making call to close driver server
I0816 10:01:37.394308    3720 main.go:141] libmachine: (functional-373000) DBG | Closing plugin on server side
I0816 10:01:37.394313    3720 main.go:141] libmachine: (functional-373000) Calling .Close
I0816 10:01:37.394432    3720 main.go:141] libmachine: (functional-373000) DBG | Closing plugin on server side
I0816 10:01:37.394456    3720 main.go:141] libmachine: Successfully made call to close driver server
I0816 10:01:37.394469    3720 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.15s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.50s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-373000 ssh pgrep buildkitd: exit status 1 (122.292562ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 image build -t localhost/my-image:functional-373000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-amd64 -p functional-373000 image build -t localhost/my-image:functional-373000 testdata/build --alsologtostderr: (2.218304806s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-373000 image build -t localhost/my-image:functional-373000 testdata/build --alsologtostderr:
I0816 10:01:37.595848    3729 out.go:345] Setting OutFile to fd 1 ...
I0816 10:01:37.596109    3729 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 10:01:37.596114    3729 out.go:358] Setting ErrFile to fd 2...
I0816 10:01:37.596118    3729 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 10:01:37.596312    3729 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1276/.minikube/bin
I0816 10:01:37.596914    3729 config.go:182] Loaded profile config "functional-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0816 10:01:37.598106    3729 config.go:182] Loaded profile config "functional-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0816 10:01:37.598456    3729 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0816 10:01:37.598495    3729 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0816 10:01:37.606923    3729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50976
I0816 10:01:37.607325    3729 main.go:141] libmachine: () Calling .GetVersion
I0816 10:01:37.607765    3729 main.go:141] libmachine: Using API Version  1
I0816 10:01:37.607777    3729 main.go:141] libmachine: () Calling .SetConfigRaw
I0816 10:01:37.608022    3729 main.go:141] libmachine: () Calling .GetMachineName
I0816 10:01:37.608150    3729 main.go:141] libmachine: (functional-373000) Calling .GetState
I0816 10:01:37.608241    3729 main.go:141] libmachine: (functional-373000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0816 10:01:37.608308    3729 main.go:141] libmachine: (functional-373000) DBG | hyperkit pid from json: 2596
I0816 10:01:37.609622    3729 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0816 10:01:37.609647    3729 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0816 10:01:37.618066    3729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50978
I0816 10:01:37.618413    3729 main.go:141] libmachine: () Calling .GetVersion
I0816 10:01:37.618755    3729 main.go:141] libmachine: Using API Version  1
I0816 10:01:37.618772    3729 main.go:141] libmachine: () Calling .SetConfigRaw
I0816 10:01:37.618964    3729 main.go:141] libmachine: () Calling .GetMachineName
I0816 10:01:37.619057    3729 main.go:141] libmachine: (functional-373000) Calling .DriverName
I0816 10:01:37.619231    3729 ssh_runner.go:195] Run: systemctl --version
I0816 10:01:37.619253    3729 main.go:141] libmachine: (functional-373000) Calling .GetSSHHostname
I0816 10:01:37.619337    3729 main.go:141] libmachine: (functional-373000) Calling .GetSSHPort
I0816 10:01:37.619419    3729 main.go:141] libmachine: (functional-373000) Calling .GetSSHKeyPath
I0816 10:01:37.619506    3729 main.go:141] libmachine: (functional-373000) Calling .GetSSHUsername
I0816 10:01:37.619588    3729 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/functional-373000/id_rsa Username:docker}
I0816 10:01:37.650586    3729 build_images.go:161] Building image from path: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.168854547.tar
I0816 10:01:37.650669    3729 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0816 10:01:37.660040    3729 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.168854547.tar
I0816 10:01:37.666440    3729 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.168854547.tar: stat -c "%s %y" /var/lib/minikube/build/build.168854547.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.168854547.tar': No such file or directory
I0816 10:01:37.666472    3729 ssh_runner.go:362] scp /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.168854547.tar --> /var/lib/minikube/build/build.168854547.tar (3072 bytes)
I0816 10:01:37.694426    3729 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.168854547
I0816 10:01:37.707092    3729 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.168854547 -xf /var/lib/minikube/build/build.168854547.tar
I0816 10:01:37.718880    3729 docker.go:360] Building image: /var/lib/minikube/build/build.168854547
I0816 10:01:37.718955    3729 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-373000 /var/lib/minikube/build/build.168854547
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.3s

#6 [2/3] RUN true
#6 DONE 0.3s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:ca34c50f7bd78d2709c497124f15a91278063e130a6f62f370abe87aa0a0597d done
#8 naming to localhost/my-image:functional-373000 done
#8 DONE 0.0s
I0816 10:01:39.711831    3729 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-373000 /var/lib/minikube/build/build.168854547: (1.992818959s)
I0816 10:01:39.711900    3729 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.168854547
I0816 10:01:39.720064    3729 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.168854547.tar
I0816 10:01:39.727791    3729 build_images.go:217] Built localhost/my-image:functional-373000 from /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.168854547.tar
I0816 10:01:39.727816    3729 build_images.go:133] succeeded building to: functional-373000
I0816 10:01:39.727820    3729 build_images.go:134] failed building to: 
I0816 10:01:39.727859    3729 main.go:141] libmachine: Making call to close driver server
I0816 10:01:39.727867    3729 main.go:141] libmachine: (functional-373000) Calling .Close
I0816 10:01:39.728022    3729 main.go:141] libmachine: Successfully made call to close driver server
I0816 10:01:39.728033    3729 main.go:141] libmachine: Making call to close connection to plugin binary
I0816 10:01:39.728041    3729 main.go:141] libmachine: Making call to close driver server
I0816 10:01:39.728043    3729 main.go:141] libmachine: (functional-373000) DBG | Closing plugin on server side
I0816 10:01:39.728049    3729 main.go:141] libmachine: (functional-373000) Calling .Close
I0816 10:01:39.728180    3729 main.go:141] libmachine: Successfully made call to close driver server
I0816 10:01:39.728189    3729 main.go:141] libmachine: Making call to close connection to plugin binary
I0816 10:01:39.728208    3729 main.go:141] libmachine: (functional-373000) DBG | Closing plugin on server side
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.50s)
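Editor's note: the passing ImageBuild flow above is a plain build-then-list round trip against the cluster's container runtime. Below is a minimal Go sketch of that same sequence, assuming the out/minikube-darwin-amd64 binary and functional-373000 profile from this run; the build-context directory is a placeholder, not a path from the log.

// Build an image inside the cluster, then assert it shows up in
// `image ls` — the same two steps the test above performs.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	bin := "out/minikube-darwin-amd64" // binary path from this run
	profile := "functional-373000"     // profile name from this run

	build := exec.Command(bin, "-p", profile, "image", "build",
		"-t", "localhost/my-image:"+profile, "./build-context") // placeholder dir
	if out, err := build.CombinedOutput(); err != nil {
		log.Fatalf("image build failed: %v\n%s", err, out)
	}

	ls, err := exec.Command(bin, "-p", profile, "image", "ls").Output()
	if err != nil {
		log.Fatal(err)
	}
	if !strings.Contains(string(ls), "localhost/my-image") {
		log.Fatal("built image missing from image ls output")
	}
	fmt.Println("image built and listed")
}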

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (2.024042127s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-373000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.06s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-373000 docker-env) && out/minikube-darwin-amd64 status -p functional-373000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-373000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.57s)
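Editor's note: the DockerEnv check works because `docker-env` prints shell exports that repoint the local docker client at the VM's daemon, so both commands above run inside a single bash invocation where the eval takes effect. A sketch of that same single-shell pattern driven from Go; the binary path and profile name are the ones from this run.

// Run `eval $(minikube docker-env)` and `docker images` in the same
// shell, mirroring the bash one-liner the test executes.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	bin := "out/minikube-darwin-amd64" // binary path from this run
	script := "eval $(" + bin + " -p functional-373000 docker-env) && docker images"
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	if err != nil {
		log.Fatalf("%v\n%s", err, out)
	}
	fmt.Printf("%s", out)
}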

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 image load --daemon kicbase/echo-server:functional-373000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.86s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 image load --daemon kicbase/echo-server:functional-373000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.60s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-373000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 image load --daemon kicbase/echo-server:functional-373000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 image save kicbase/echo-server:functional-373000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 image rm kicbase/echo-server:functional-373000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.48s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-373000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 image save --daemon kicbase/echo-server:functional-373000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-373000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (22.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-373000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-373000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-9fbs9" [5b0fb74a-9d89-4f6b-be99-a96251a07719] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-9fbs9" [5b0fb74a-9d89-4f6b-be99-a96251a07719] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 22.005646788s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (22.14s)
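Editor's note: the DeployApp sequence is create, expose, wait — the harness polls pods carrying the app=hello-node label until one reports Running. Outside the harness, `kubectl wait` expresses the same barrier. A sketch under the context name from this run; the 10m timeout mirrors the wait budget logged above.

// Deploy echoserver, expose it as a NodePort, then block until the
// labelled pod is Ready — the create/expose/wait steps shown above.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func kubectl(args ...string) {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
}

func main() {
	ctx := "functional-373000" // kube context from this run
	kubectl("--context", ctx, "create", "deployment", "hello-node",
		"--image=registry.k8s.io/echoserver:1.8")
	kubectl("--context", ctx, "expose", "deployment", "hello-node",
		"--type=NodePort", "--port=8080")
	kubectl("--context", ctx, "wait", "--for=condition=Ready",
		"pod", "-l", "app=hello-node", "--timeout=10m")
	fmt.Println("hello-node is ready")
}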

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.21s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 service list -o json
functional_test.go:1494: Took "179.453503ms" to run "out/minikube-darwin-amd64 -p functional-373000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.18s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.169.0.4:31189
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-373000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-373000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-373000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-373000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3408: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.25s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-373000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-373000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [97e23a94-40f2-4a54-b287-f074222a04d6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [97e23a94-40f2-4a54-b287-f074222a04d6] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.003376592s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.17s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.169.0.4:31189
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.25s)
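Editor's note: `service hello-node --url` resolves the NodePort endpoint (http://192.169.0.4:31189 in this run), after which reachability is an ordinary HTTP GET. A sketch with the endpoint hard-coded from the log line above; in another run, substitute whatever address the command prints.

// Probe the NodePort endpoint the URL test discovered.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://192.169.0.4:31189") // endpoint from this run
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Printf("%.200s\n", body) // first 200 bytes of the echo reply
}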

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-373000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.157.130 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)
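Editor's note: the dig invocation queries the in-cluster DNS server directly at 10.96.0.10, which is only routable from the host while `minikube tunnel` is running. The same query can be made from Go with a net.Resolver pinned to that server; a sketch, with the server address and service name taken from this run.

// Resolve the tunnelled service against the cluster DNS server,
// the same query `dig @10.96.0.10` issues above.
package main

import (
	"context"
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53") // cluster DNS from this run
		},
	}
	addrs, err := r.LookupHost(context.Background(), "nginx-svc.default.svc.cluster.local.")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(addrs)
}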

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-373000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.28s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1315: Took "175.942275ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1329: Took "79.085872ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.26s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1366: Took "178.730722ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1379: Took "79.702108ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-373000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port2379565442/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1723827684779841000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port2379565442/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1723827684779841000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port2379565442/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1723827684779841000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port2379565442/001/test-1723827684779841000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-373000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (116.74891ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 16 17:01 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 16 17:01 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 16 17:01 test-1723827684779841000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 ssh cat /mount-9p/test-1723827684779841000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-373000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [c75c5fe4-6374-4472-bfe5-eb44a3ea271f] Pending
helpers_test.go:344: "busybox-mount" [c75c5fe4-6374-4472-bfe5-eb44a3ea271f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [c75c5fe4-6374-4472-bfe5-eb44a3ea271f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [c75c5fe4-6374-4472-bfe5-eb44a3ea271f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.005042751s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-373000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-373000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port2379565442/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.15s)
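Editor's note: the first findmnt above exits non-zero and is retried — the 9p mount daemon takes a moment to come up, so the check is inherently a poll. A sketch of that start-then-poll shape; the binary path and profile are from this run, while the host directory is a placeholder.

// Start `minikube mount` in the background, then poll findmnt over
// ssh until the 9p mount is visible — tolerating the startup race
// the log above shows.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	bin := "out/minikube-darwin-amd64" // binary path from this run
	mount := exec.Command(bin, "mount", "-p", "functional-373000",
		"/tmp/hostdir:/mount-9p") // placeholder host directory
	if err := mount.Start(); err != nil {
		log.Fatal(err)
	}
	defer mount.Process.Kill()

	for i := 0; i < 10; i++ {
		check := exec.Command(bin, "-p", "functional-373000", "ssh",
			"findmnt -T /mount-9p | grep 9p")
		if out, err := check.CombinedOutput(); err == nil {
			fmt.Printf("%s", out)
			return
		}
		time.Sleep(time.Second)
	}
	log.Fatal("mount never became visible in the guest")
}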

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-373000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port4084863197/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-373000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (181.789915ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 ssh "findmnt -T /mount-9p | grep 9p"
E0816 10:01:32.762538    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/addons-725000/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-373000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port4084863197/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-373000 ssh "sudo umount -f /mount-9p": exit status 1 (125.041396ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-373000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-373000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port4084863197/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.69s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-373000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3825892010/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-373000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3825892010/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-373000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3825892010/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-373000 ssh "findmnt -T" /mount1: exit status 1 (154.70769ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-373000 ssh "findmnt -T" /mount1: exit status 1 (215.890074ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-373000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-373000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-373000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3825892010/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-373000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3825892010/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-373000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3825892010/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.43s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-373000
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-373000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-373000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-286000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.05s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.36s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.32s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

                                                
                                    
TestImageBuild/serial/Setup (37.53s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-542000 --driver=hyperkit 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-542000 --driver=hyperkit : (37.526399046s)
--- PASS: TestImageBuild/serial/Setup (37.53s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.86s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-542000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-542000: (1.855268951s)
--- PASS: TestImageBuild/serial/NormalBuild (1.86s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.83s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-542000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.83s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.6s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-542000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.60s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.64s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-542000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.64s)

                                                
                                    
TestJSONOutput/start/Command (81.38s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-213000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-213000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit : (1m21.376358902s)
--- PASS: TestJSONOutput/start/Command (81.38s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.48s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-213000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.48s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.45s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-213000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.45s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (8.34s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-213000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-213000 --output=json --user=testUser: (8.344602697s)
--- PASS: TestJSONOutput/stop/Command (8.34s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.58s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-813000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-813000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (358.608248ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"5d458c18-b892-465d-be10-89f71e5b234e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-813000] minikube v1.33.1 on Darwin 14.6.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d6cdce49-014a-42d5-80fc-981e73c8f5a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19461"}}
	{"specversion":"1.0","id":"a1e181e7-11b6-4c42-bea6-9692986bd1e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19461-1276/kubeconfig"}}
	{"specversion":"1.0","id":"6289dfa7-b6b6-44dc-af43-37af9e75d2fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"f8eadaca-de8d-48f4-afc5-6e4a5c5aed0c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e1227059-bfcf-4080-acc3-006139d832ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1276/.minikube"}}
	{"specversion":"1.0","id":"bf58936b-3185-4bf6-bf8e-ac620418e0b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0c232c06-ffb2-4198-84ea-ad4dec9194de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-813000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-813000
--- PASS: TestErrorJSONOutput (0.58s)
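Editor's note: each stdout line above is a self-contained CloudEvents-style JSON object, so consumers can decode the stream line by line. A decoding sketch whose struct fields are shaped from the keys visible in this log, not taken from minikube's own source; the sample line is the error event from the run above.

// Decode one line of `--output=json` output.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	line := `{"specversion":"1.0","id":"0c232c06-ffb2-4198-84ea-ad4dec9194de",` +
		`"source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error",` +
		`"data":{"exitcode":"56","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS"}}`
	var e event
	if err := json.Unmarshal([]byte(line), &e); err != nil {
		log.Fatal(err)
	}
	fmt.Println(e.Type, e.Data["name"], e.Data["exitcode"], e.Data["message"])
}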

                                                
                                    
TestMainNoArgs (0.08s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

                                                
                                    
TestMinikubeProfile (85.95s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-791000 --driver=hyperkit 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-791000 --driver=hyperkit : (37.398359622s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-793000 --driver=hyperkit 
E0816 10:35:35.618440    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/functional-373000/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-793000 --driver=hyperkit : (37.243168632s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-791000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-793000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-793000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-793000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-793000: (5.259678393s)
helpers_test.go:175: Cleaning up "first-791000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-791000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-791000: (5.270493468s)
--- PASS: TestMinikubeProfile (85.95s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (110.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-685000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit 
E0816 10:38:38.686606    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/functional-373000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-685000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit : (1m49.860408566s)
multinode_test.go:102: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-685000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (110.10s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-685000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-685000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-685000 -- rollout status deployment/busybox: (4.069797158s)
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-685000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-685000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-685000 -- exec busybox-7dff88458-6xplk -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-685000 -- exec busybox-7dff88458-wwrjx -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-685000 -- exec busybox-7dff88458-6xplk -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-685000 -- exec busybox-7dff88458-wwrjx -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-685000 -- exec busybox-7dff88458-6xplk -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-685000 -- exec busybox-7dff88458-wwrjx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.77s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-685000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-685000 -- exec busybox-7dff88458-6xplk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-685000 -- exec busybox-7dff88458-6xplk -- sh -c "ping -c 1 192.169.0.1"
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-685000 -- exec busybox-7dff88458-wwrjx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-685000 -- exec busybox-7dff88458-wwrjx -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.89s)
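Editor's note: the ping check first extracts the resolved address of host.minikube.internal — in busybox's nslookup output the answer sits on line 5, which is what the awk 'NR==5' | cut -d' ' -f3 pipeline slices out. A sketch of the same two-step check from Go; the pod and context names are the ones in this run, and the line-5 assumption is specific to busybox's nslookup formatting.

// Resolve host.minikube.internal inside a pod, then ping the result
// once — the lookup-then-ping pair the test above runs per pod.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	ctx := "multinode-685000"        // kube context from this run
	pod := "busybox-7dff88458-6xplk" // pod name from this run
	lookup := "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"

	out, err := exec.Command("kubectl", "--context", ctx, "exec", pod,
		"--", "sh", "-c", lookup).Output()
	if err != nil {
		log.Fatal(err)
	}
	ip := strings.TrimSpace(string(out))

	ping, err := exec.Command("kubectl", "--context", ctx, "exec", pod,
		"--", "sh", "-c", "ping -c 1 "+ip).CombinedOutput()
	if err != nil {
		log.Fatalf("ping %s failed: %v\n%s", ip, err, ping)
	}
	fmt.Printf("%s", ping)
}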

                                                
                                    
TestMultiNode/serial/AddNode (48.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-685000 -v 3 --alsologtostderr
E0816 10:40:35.607971    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/functional-373000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-685000 -v 3 --alsologtostderr: (48.22319063s)
multinode_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-685000 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (48.55s)
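
Note: `minikube node add -p <profile>` provisions one more worker VM for the named profile (here the third node), and the follow-up `status` call confirms every node is reachable. Condensed sketch (`minikube` stands for the out/minikube-darwin-amd64 binary under test):

$ minikube node add -p multinode-685000 -v 3 --alsologtostderr
$ minikube -p multinode-685000 status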

TestMultiNode/serial/MultiNodeLabels (0.05s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-685000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.05s)

TestMultiNode/serial/ProfileList (0.18s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.18s)

TestMultiNode/serial/CopyFile (5.29s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-685000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-685000 cp testdata/cp-test.txt multinode-685000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-685000 ssh -n multinode-685000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-685000 cp multinode-685000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile3486017263/001/cp-test_multinode-685000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-685000 ssh -n multinode-685000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-685000 cp multinode-685000:/home/docker/cp-test.txt multinode-685000-m02:/home/docker/cp-test_multinode-685000_multinode-685000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-685000 ssh -n multinode-685000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-685000 ssh -n multinode-685000-m02 "sudo cat /home/docker/cp-test_multinode-685000_multinode-685000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-685000 cp multinode-685000:/home/docker/cp-test.txt multinode-685000-m03:/home/docker/cp-test_multinode-685000_multinode-685000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-685000 ssh -n multinode-685000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-685000 ssh -n multinode-685000-m03 "sudo cat /home/docker/cp-test_multinode-685000_multinode-685000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-685000 cp testdata/cp-test.txt multinode-685000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-685000 ssh -n multinode-685000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-685000 cp multinode-685000-m02:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile3486017263/001/cp-test_multinode-685000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-685000 ssh -n multinode-685000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-685000 cp multinode-685000-m02:/home/docker/cp-test.txt multinode-685000:/home/docker/cp-test_multinode-685000-m02_multinode-685000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-685000 ssh -n multinode-685000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-685000 ssh -n multinode-685000 "sudo cat /home/docker/cp-test_multinode-685000-m02_multinode-685000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-685000 cp multinode-685000-m02:/home/docker/cp-test.txt multinode-685000-m03:/home/docker/cp-test_multinode-685000-m02_multinode-685000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-685000 ssh -n multinode-685000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-685000 ssh -n multinode-685000-m03 "sudo cat /home/docker/cp-test_multinode-685000-m02_multinode-685000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-685000 cp testdata/cp-test.txt multinode-685000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-685000 ssh -n multinode-685000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-685000 cp multinode-685000-m03:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile3486017263/001/cp-test_multinode-685000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-685000 ssh -n multinode-685000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-685000 cp multinode-685000-m03:/home/docker/cp-test.txt multinode-685000:/home/docker/cp-test_multinode-685000-m03_multinode-685000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-685000 ssh -n multinode-685000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-685000 ssh -n multinode-685000 "sudo cat /home/docker/cp-test_multinode-685000-m03_multinode-685000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-685000 cp multinode-685000-m03:/home/docker/cp-test.txt multinode-685000-m02:/home/docker/cp-test_multinode-685000-m03_multinode-685000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-685000 ssh -n multinode-685000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-685000 ssh -n multinode-685000-m02 "sudo cat /home/docker/cp-test_multinode-685000-m03_multinode-685000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.29s)
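
Note: this block exercises every direction of `minikube cp` on a multi-node profile: host-to-node, node-to-host, and node-to-node, each copy verified with `ssh -n <node> "sudo cat ..."` on the receiving side. A condensed sketch (binary name shortened, destination paths illustrative):

# host -> node
$ minikube -p multinode-685000 cp testdata/cp-test.txt multinode-685000:/home/docker/cp-test.txt
# node -> host
$ minikube -p multinode-685000 cp multinode-685000:/home/docker/cp-test.txt /tmp/cp-test.txt
# node -> node
$ minikube -p multinode-685000 cp multinode-685000:/home/docker/cp-test.txt multinode-685000-m02:/home/docker/cp-test.txt
# verify on the receiving node
$ minikube -p multinode-685000 ssh -n multinode-685000-m02 "sudo cat /home/docker/cp-test.txt"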

TestMultiNode/serial/StopNode (2.85s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-685000 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-darwin-amd64 -p multinode-685000 node stop m03: (2.339616629s)
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-685000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-685000 status: exit status 7 (252.559969ms)

-- stdout --
	multinode-685000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-685000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-685000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-685000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-685000 status --alsologtostderr: exit status 7 (254.346789ms)

-- stdout --
	multinode-685000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-685000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-685000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0816 10:41:08.164382    6838 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:41:08.164659    6838 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:41:08.164664    6838 out.go:358] Setting ErrFile to fd 2...
	I0816 10:41:08.164668    6838 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:41:08.164847    6838 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1276/.minikube/bin
	I0816 10:41:08.165022    6838 out.go:352] Setting JSON to false
	I0816 10:41:08.165041    6838 mustload.go:65] Loading cluster: multinode-685000
	I0816 10:41:08.165084    6838 notify.go:220] Checking for updates...
	I0816 10:41:08.165351    6838 config.go:182] Loaded profile config "multinode-685000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:41:08.165366    6838 status.go:255] checking status of multinode-685000 ...
	I0816 10:41:08.165717    6838 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:41:08.165779    6838 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:41:08.174794    6838 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53152
	I0816 10:41:08.175194    6838 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:41:08.175648    6838 main.go:141] libmachine: Using API Version  1
	I0816 10:41:08.175658    6838 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:41:08.175875    6838 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:41:08.175990    6838 main.go:141] libmachine: (multinode-685000) Calling .GetState
	I0816 10:41:08.176083    6838 main.go:141] libmachine: (multinode-685000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:41:08.176186    6838 main.go:141] libmachine: (multinode-685000) DBG | hyperkit pid from json: 6490
	I0816 10:41:08.177313    6838 status.go:330] multinode-685000 host status = "Running" (err=<nil>)
	I0816 10:41:08.177333    6838 host.go:66] Checking if "multinode-685000" exists ...
	I0816 10:41:08.177585    6838 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:41:08.177611    6838 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:41:08.185974    6838 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53154
	I0816 10:41:08.186318    6838 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:41:08.186680    6838 main.go:141] libmachine: Using API Version  1
	I0816 10:41:08.186696    6838 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:41:08.186903    6838 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:41:08.187096    6838 main.go:141] libmachine: (multinode-685000) Calling .GetIP
	I0816 10:41:08.187208    6838 host.go:66] Checking if "multinode-685000" exists ...
	I0816 10:41:08.187454    6838 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:41:08.187478    6838 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:41:08.199447    6838 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53156
	I0816 10:41:08.199834    6838 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:41:08.200162    6838 main.go:141] libmachine: Using API Version  1
	I0816 10:41:08.200171    6838 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:41:08.200373    6838 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:41:08.200475    6838 main.go:141] libmachine: (multinode-685000) Calling .DriverName
	I0816 10:41:08.200604    6838 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:41:08.200625    6838 main.go:141] libmachine: (multinode-685000) Calling .GetSSHHostname
	I0816 10:41:08.200705    6838 main.go:141] libmachine: (multinode-685000) Calling .GetSSHPort
	I0816 10:41:08.200810    6838 main.go:141] libmachine: (multinode-685000) Calling .GetSSHKeyPath
	I0816 10:41:08.200894    6838 main.go:141] libmachine: (multinode-685000) Calling .GetSSHUsername
	I0816 10:41:08.200974    6838 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/multinode-685000/id_rsa Username:docker}
	I0816 10:41:08.232030    6838 ssh_runner.go:195] Run: systemctl --version
	I0816 10:41:08.236395    6838 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:41:08.248327    6838 kubeconfig.go:125] found "multinode-685000" server: "https://192.169.0.13:8443"
	I0816 10:41:08.248348    6838 api_server.go:166] Checking apiserver status ...
	I0816 10:41:08.248384    6838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:41:08.260522    6838 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1982/cgroup
	W0816 10:41:08.268816    6838 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1982/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:41:08.268875    6838 ssh_runner.go:195] Run: ls
	I0816 10:41:08.273363    6838 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0816 10:41:08.276373    6838 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0816 10:41:08.276384    6838 status.go:422] multinode-685000 apiserver status = Running (err=<nil>)
	I0816 10:41:08.276393    6838 status.go:257] multinode-685000 status: &{Name:multinode-685000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:41:08.276404    6838 status.go:255] checking status of multinode-685000-m02 ...
	I0816 10:41:08.276650    6838 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:41:08.276671    6838 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:41:08.285229    6838 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53160
	I0816 10:41:08.285626    6838 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:41:08.285945    6838 main.go:141] libmachine: Using API Version  1
	I0816 10:41:08.285954    6838 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:41:08.286181    6838 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:41:08.286291    6838 main.go:141] libmachine: (multinode-685000-m02) Calling .GetState
	I0816 10:41:08.286383    6838 main.go:141] libmachine: (multinode-685000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:41:08.286457    6838 main.go:141] libmachine: (multinode-685000-m02) DBG | hyperkit pid from json: 6529
	I0816 10:41:08.287645    6838 status.go:330] multinode-685000-m02 host status = "Running" (err=<nil>)
	I0816 10:41:08.287654    6838 host.go:66] Checking if "multinode-685000-m02" exists ...
	I0816 10:41:08.287896    6838 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:41:08.287919    6838 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:41:08.296371    6838 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53162
	I0816 10:41:08.296713    6838 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:41:08.297007    6838 main.go:141] libmachine: Using API Version  1
	I0816 10:41:08.297016    6838 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:41:08.297250    6838 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:41:08.297380    6838 main.go:141] libmachine: (multinode-685000-m02) Calling .GetIP
	I0816 10:41:08.297461    6838 host.go:66] Checking if "multinode-685000-m02" exists ...
	I0816 10:41:08.297706    6838 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:41:08.297731    6838 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:41:08.306164    6838 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53164
	I0816 10:41:08.306537    6838 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:41:08.306846    6838 main.go:141] libmachine: Using API Version  1
	I0816 10:41:08.306856    6838 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:41:08.307076    6838 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:41:08.307181    6838 main.go:141] libmachine: (multinode-685000-m02) Calling .DriverName
	I0816 10:41:08.307308    6838 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:41:08.307320    6838 main.go:141] libmachine: (multinode-685000-m02) Calling .GetSSHHostname
	I0816 10:41:08.307398    6838 main.go:141] libmachine: (multinode-685000-m02) Calling .GetSSHPort
	I0816 10:41:08.307483    6838 main.go:141] libmachine: (multinode-685000-m02) Calling .GetSSHKeyPath
	I0816 10:41:08.307569    6838 main.go:141] libmachine: (multinode-685000-m02) Calling .GetSSHUsername
	I0816 10:41:08.307653    6838 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1276/.minikube/machines/multinode-685000-m02/id_rsa Username:docker}
	I0816 10:41:08.340566    6838 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:41:08.350893    6838 status.go:257] multinode-685000-m02 status: &{Name:multinode-685000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:41:08.350917    6838 status.go:255] checking status of multinode-685000-m03 ...
	I0816 10:41:08.351197    6838 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:41:08.351223    6838 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:41:08.359974    6838 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53167
	I0816 10:41:08.360340    6838 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:41:08.360708    6838 main.go:141] libmachine: Using API Version  1
	I0816 10:41:08.360733    6838 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:41:08.360961    6838 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:41:08.361091    6838 main.go:141] libmachine: (multinode-685000-m03) Calling .GetState
	I0816 10:41:08.361176    6838 main.go:141] libmachine: (multinode-685000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:41:08.361252    6838 main.go:141] libmachine: (multinode-685000-m03) DBG | hyperkit pid from json: 6610
	I0816 10:41:08.362418    6838 main.go:141] libmachine: (multinode-685000-m03) DBG | hyperkit pid 6610 missing from process table
	I0816 10:41:08.362445    6838 status.go:330] multinode-685000-m03 host status = "Stopped" (err=<nil>)
	I0816 10:41:08.362450    6838 status.go:343] host is not running, skipping remaining checks
	I0816 10:41:08.362456    6838 status.go:257] multinode-685000-m03 status: &{Name:multinode-685000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.85s)
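
Note: stopping a single node is expected to make `status` exit non-zero; in these runs exit status 7 marks the "host stopped" case (the suite elsewhere logs it as "may be ok"). Sketch (binary name shortened):

$ minikube -p multinode-685000 node stop m03
$ minikube -p multinode-685000 status
# exit code 7: m03 reports host/kubelet Stopped while the other nodes keep Running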

TestMultiNode/serial/StartAfterStop (36.69s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-685000 node start m03 -v=7 --alsologtostderr
E0816 10:41:32.695271    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/addons-725000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-darwin-amd64 -p multinode-685000 node start m03 -v=7 --alsologtostderr: (36.328525866s)
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-685000 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (36.69s)

TestMultiNode/serial/RestartKeepsNodes (180.32s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-685000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-685000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-685000: (18.848247259s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-685000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-685000 --wait=true -v=8 --alsologtostderr: (2m41.360771035s)
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-685000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (180.32s)
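
Note: the restart check snapshots the node list, stops the whole profile, restarts it with --wait=true, and lists again; a full restart should bring back all three nodes, not just the control plane. Sketch (binary name shortened):

$ minikube node list -p multinode-685000
$ minikube stop -p multinode-685000
$ minikube start -p multinode-685000 --wait=true
$ minikube node list -p multinode-685000    # expected to match the first listing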

TestMultiNode/serial/DeleteNode (3.33s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-685000 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-darwin-amd64 -p multinode-685000 node delete m03: (2.988451077s)
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-685000 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (3.33s)
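
Note: the final go-template prints one line per node carrying that node's Ready condition status, so with m03 deleted a healthy cluster should presumably emit exactly two "True" lines. The query on its own (outer quoting simplified relative to the log):

$ kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'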

TestMultiNode/serial/StopMultiNode (16.84s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-685000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-amd64 -p multinode-685000 stop: (16.683762263s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-685000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-685000 status: exit status 7 (80.285287ms)

-- stdout --
	multinode-685000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-685000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-685000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-685000 status --alsologtostderr: exit status 7 (78.445976ms)

-- stdout --
	multinode-685000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-685000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0816 10:45:05.514836    7002 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:45:05.515122    7002 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:45:05.515128    7002 out.go:358] Setting ErrFile to fd 2...
	I0816 10:45:05.515131    7002 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:45:05.515296    7002 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1276/.minikube/bin
	I0816 10:45:05.515491    7002 out.go:352] Setting JSON to false
	I0816 10:45:05.515512    7002 mustload.go:65] Loading cluster: multinode-685000
	I0816 10:45:05.515555    7002 notify.go:220] Checking for updates...
	I0816 10:45:05.515801    7002 config.go:182] Loaded profile config "multinode-685000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:45:05.515820    7002 status.go:255] checking status of multinode-685000 ...
	I0816 10:45:05.516196    7002 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:45:05.516260    7002 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:45:05.525051    7002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53398
	I0816 10:45:05.525422    7002 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:45:05.525815    7002 main.go:141] libmachine: Using API Version  1
	I0816 10:45:05.525824    7002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:45:05.526051    7002 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:45:05.526168    7002 main.go:141] libmachine: (multinode-685000) Calling .GetState
	I0816 10:45:05.526264    7002 main.go:141] libmachine: (multinode-685000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:45:05.526333    7002 main.go:141] libmachine: (multinode-685000) DBG | hyperkit pid from json: 6913
	I0816 10:45:05.527243    7002 main.go:141] libmachine: (multinode-685000) DBG | hyperkit pid 6913 missing from process table
	I0816 10:45:05.527272    7002 status.go:330] multinode-685000 host status = "Stopped" (err=<nil>)
	I0816 10:45:05.527278    7002 status.go:343] host is not running, skipping remaining checks
	I0816 10:45:05.527284    7002 status.go:257] multinode-685000 status: &{Name:multinode-685000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:45:05.527303    7002 status.go:255] checking status of multinode-685000-m02 ...
	I0816 10:45:05.527530    7002 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 10:45:05.527561    7002 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 10:45:05.535863    7002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53400
	I0816 10:45:05.536187    7002 main.go:141] libmachine: () Calling .GetVersion
	I0816 10:45:05.536520    7002 main.go:141] libmachine: Using API Version  1
	I0816 10:45:05.536537    7002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 10:45:05.536727    7002 main.go:141] libmachine: () Calling .GetMachineName
	I0816 10:45:05.536833    7002 main.go:141] libmachine: (multinode-685000-m02) Calling .GetState
	I0816 10:45:05.536916    7002 main.go:141] libmachine: (multinode-685000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 10:45:05.537007    7002 main.go:141] libmachine: (multinode-685000-m02) DBG | hyperkit pid from json: 6933
	I0816 10:45:05.537892    7002 main.go:141] libmachine: (multinode-685000-m02) DBG | hyperkit pid 6933 missing from process table
	I0816 10:45:05.537919    7002 status.go:330] multinode-685000-m02 host status = "Stopped" (err=<nil>)
	I0816 10:45:05.537927    7002 status.go:343] host is not running, skipping remaining checks
	I0816 10:45:05.537934    7002 status.go:257] multinode-685000-m02 status: &{Name:multinode-685000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (16.84s)

TestMultiNode/serial/RestartMultiNode (108.28s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-685000 --wait=true -v=8 --alsologtostderr --driver=hyperkit 
E0816 10:45:35.632941    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/functional-373000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:46:15.813813    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/addons-725000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:46:32.727967    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/addons-725000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-685000 --wait=true -v=8 --alsologtostderr --driver=hyperkit : (1m47.933786553s)
multinode_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-685000 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (108.28s)

TestMultiNode/serial/ValidateNameConflict (43.19s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-685000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-685000-m02 --driver=hyperkit 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-685000-m02 --driver=hyperkit : exit status 14 (418.804693ms)

-- stdout --
	* [multinode-685000-m02] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-685000-m02' is duplicated with machine name 'multinode-685000-m02' in profile 'multinode-685000'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-685000-m03 --driver=hyperkit 
multinode_test.go:472: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-685000-m03 --driver=hyperkit : (37.20036607s)
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-685000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-685000: exit status 80 (263.049305ms)

-- stdout --
	* Adding node m03 to cluster multinode-685000 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-685000-m03 already exists in multinode-685000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-685000-m03
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-685000-m03: (5.250458633s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (43.19s)
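
Note: two name-collision paths are validated here: starting a new profile whose name matches an existing machine name fails fast with exit 14 (MK_USAGE, "Profile name should be unique"), and `node add` fails with exit 80 (GUEST_NODE_ADD) because the next auto-assigned node name, multinode-685000-m03, is already taken by the standalone profile created just before. Condensed (binary name shortened):

$ minikube start -p multinode-685000-m02 --driver=hyperkit    # exit 14: duplicate profile name
$ minikube start -p multinode-685000-m03 --driver=hyperkit    # standalone profile, succeeds
$ minikube node add -p multinode-685000                       # exit 80: node name m03 collides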

TestPreload (135.06s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-427000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-427000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4: (1m11.092584076s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-427000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-427000 image pull gcr.io/k8s-minikube/busybox: (1.247441139s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-427000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-427000: (8.442556434s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-427000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-427000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit : (48.883032548s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-427000 image list
helpers_test.go:175: Cleaning up "test-preload-427000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-427000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-427000: (5.243244586s)
--- PASS: TestPreload (135.06s)
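
Note: the preload check starts Kubernetes v1.24.4 with --preload=false, pulls gcr.io/k8s-minikube/busybox into the cluster, stops, and restarts on the current default version; `image list` afterwards is expected to still include the pulled image, i.e. the image cache survives the stop/upgrade cycle. Sketch (binary and profile names shortened):

$ minikube start -p test-preload --memory=2200 --preload=false --kubernetes-version=v1.24.4 --driver=hyperkit
$ minikube -p test-preload image pull gcr.io/k8s-minikube/busybox
$ minikube stop -p test-preload
$ minikube start -p test-preload --memory=2200 --driver=hyperkit
$ minikube -p test-preload image list    # busybox should still be listed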

TestSkaffold (113.09s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe3999700343 version
skaffold_test.go:59: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe3999700343 version: (1.704344683s)
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-359000 --memory=2600 --driver=hyperkit 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-359000 --memory=2600 --driver=hyperkit : (38.861739751s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe3999700343 run --minikube-profile skaffold-359000 --kube-context skaffold-359000 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe3999700343 run --minikube-profile skaffold-359000 --kube-context skaffold-359000 --status-check=true --port-forward=false --interactive=false: (54.864188287s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-669d78d789-scdmh" [17f29e68-d61d-436a-aaca-2645e0e8bb65] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.006019263s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-6fb9bf5c95-d9bvr" [702ef52b-832f-43fb-982f-576ede762f81] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.004774593s
helpers_test.go:175: Cleaning up "skaffold-359000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-359000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-359000: (5.246386552s)
--- PASS: TestSkaffold (113.09s)
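
Note: skaffold is pointed at the minikube cluster via --minikube-profile and --kube-context, deploys its stock leeroy-web/leeroy-app example, and the test then waits up to 1m0s for each app's pods to report healthy. The invocation, with the downloaded temp binary shortened to skaffold:

$ skaffold run --minikube-profile skaffold-359000 --kube-context skaffold-359000 --status-check=true --port-forward=false --interactive=false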

TestRunningBinaryUpgrade (86.94s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.463111259 start -p running-upgrade-976000 --memory=2200 --vm-driver=hyperkit 
version_upgrade_test.go:120: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.463111259 start -p running-upgrade-976000 --memory=2200 --vm-driver=hyperkit : (55.228302708s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-976000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:130: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-976000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (25.281370379s)
helpers_test.go:175: Cleaning up "running-upgrade-976000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-976000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-976000: (5.254422175s)
--- PASS: TestRunningBinaryUpgrade (86.94s)
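
Note: the running-binary upgrade starts a cluster with a pinned v1.26.0 release binary and then re-runs `start` on the same profile with the freshly built binary while the cluster is still up, so the new binary has to adopt and upgrade the live machine in place (contrast TestStoppedBinaryUpgrade below, which stops the cluster first). Sketch (the versioned binary path is shortened from the temp file in the log):

$ ./minikube-v1.26.0 start -p running-upgrade-976000 --memory=2200 --vm-driver=hyperkit
$ out/minikube-darwin-amd64 start -p running-upgrade-976000 --memory=2200 --driver=hyperkit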

TestKubernetesUpgrade (1324.27s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-185000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:222: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-185000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit : (50.781906895s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-185000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-185000: (2.393604436s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-185000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-185000 status --format={{.Host}}: exit status 7 (66.963121ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-185000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=hyperkit 
E0816 11:10:35.670840    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/functional-373000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:11:32.758733    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/addons-725000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:11:58.747796    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/functional-373000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:14:01.875987    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/skaffold-359000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:15:24.953447    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/skaffold-359000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:15:35.663708    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/functional-373000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:16:32.751696    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/addons-725000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:19:01.919425    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/skaffold-359000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:19:35.888578    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/addons-725000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-185000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=hyperkit : (10m25.432552323s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-185000 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-185000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-185000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit : exit status 106 (511.155657ms)

-- stdout --
	* [kubernetes-upgrade-185000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-185000
	    minikube start -p kubernetes-upgrade-185000 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1850002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-185000 --kubernetes-version=v1.31.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-185000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=hyperkit 
E0816 11:20:35.708090    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/functional-373000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:21:32.796549    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/addons-725000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:24:01.916860    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/skaffold-359000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:25:35.705364    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/functional-373000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:26:32.793389    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/addons-725000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:28:38.783744    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/functional-373000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:29:01.913175    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/skaffold-359000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-185000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=hyperkit : (10m39.774072597s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-185000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-185000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-185000: (5.256398635s)
--- PASS: TestKubernetesUpgrade (1324.27s)
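
Note: the downgrade attempt (v1.31.0 back to v1.20.0) is refused up front with exit 106 (K8S_DOWNGRADE_UNSUPPORTED) rather than tried and rolled back; the recovery path is the one minikube itself prints above, e.g. recreating the cluster at the older version:

$ minikube delete -p kubernetes-upgrade-185000
$ minikube start -p kubernetes-upgrade-185000 --kubernetes-version=v1.20.0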

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.04s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin
- MINIKUBE_LOCATION=19461
- KUBECONFIG=/Users/jenkins/minikube-integration/19461-1276/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1971337345/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1971337345/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1971337345/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1971337345/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.04s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.67s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin
- MINIKUBE_LOCATION=19461
- KUBECONFIG=/Users/jenkins/minikube-integration/19461-1276/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3024682524/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3024682524/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3024682524/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3024682524/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.67s)
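
Note: both skip-upgrade scenarios surface the hyperkit driver's permission requirement: docker-machine-driver-hyperkit must be owned by root:wheel and setuid before it can create VMs, and since CI runs with --interactive=false the sudo step is skipped, producing the "Unable to update hyperkit driver" warning while the test still passes. The commands minikube wants to run (temp path generalized here to $MINIKUBE_HOME):

$ sudo chown root:wheel $MINIKUBE_HOME/.minikube/bin/docker-machine-driver-hyperkit
$ sudo chmod u+s $MINIKUBE_HOME/.minikube/bin/docker-machine-driver-hyperkit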

TestStoppedBinaryUpgrade/Setup (1s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.00s)

TestStoppedBinaryUpgrade/Upgrade (121.68s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.3844304981 start -p stopped-upgrade-583000 --memory=2200 --vm-driver=hyperkit 
version_upgrade_test.go:183: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.3844304981 start -p stopped-upgrade-583000 --memory=2200 --vm-driver=hyperkit : (37.594230217s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.3844304981 -p stopped-upgrade-583000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.3844304981 -p stopped-upgrade-583000 stop: (8.234105509s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-583000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
E0816 11:31:32.788911    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/addons-725000/client.crt: no such file or directory" logger="UnhandledError"
E0816 11:32:04.991911    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/skaffold-359000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-583000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (1m15.85470229s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (121.68s)

TestStoppedBinaryUpgrade/MinikubeLogs (2.99s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-583000
version_upgrade_test.go:206: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-583000: (2.98654681s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.99s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.44s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-188000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-188000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit : exit status 14 (439.051574ms)

-- stdout --
	* [NoKubernetes-188000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1276/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1276/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.44s)
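
This subtest passes because the failure is the point: --no-kubernetes and --kubernetes-version contradict each other, so minikube rejects the combination with exit status 14 (MK_USAGE) before any driver work begins. A sketch of the rejected call and its two resolutions (profile name illustrative):

	# rejected with MK_USAGE (exit 14): the flags conflict
	minikube start -p demo --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit
	# either drop the version flag entirely...
	minikube start -p demo --no-kubernetes --driver=hyperkit
	# ...or, if a version is pinned in the global config, clear it as the error suggests
	minikube config unset kubernetes-version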

TestNoKubernetes/serial/StartWithK8s (70.69s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-188000 --driver=hyperkit 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-188000 --driver=hyperkit : (1m10.512471864s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-188000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (70.69s)

TestNoKubernetes/serial/StartWithStopK8s (17.47s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-188000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-188000 --no-kubernetes --driver=hyperkit : (14.945330776s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-188000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-188000 status -o json: exit status 2 (152.783617ms)

-- stdout --
	{"Name":"NoKubernetes-188000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-188000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-188000: (2.375102303s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.47s)
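
The exit status 2 from status is expected here rather than a failure: minikube status exits nonzero when components are not running, and after restarting the existing profile with --no-kubernetes the host stays up while the Kubernetes components stop, exactly the state asserted above. A sketch of pulling the same fields out of the JSON, assuming jq is available on the host:

	# exits 2 because Kubelet/APIServer are Stopped; the JSON still lands on stdout
	out/minikube-darwin-amd64 -p NoKubernetes-188000 status -o json | jq -r '.Host, .Kubelet, .APIServer'
	# Running
	# Stopped
	# Stopped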

TestNoKubernetes/serial/Start (19.5s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-188000 --no-kubernetes --driver=hyperkit 
E0816 11:34:01.923673    1831 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/skaffold-359000/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-188000 --no-kubernetes --driver=hyperkit : (19.497749951s)
--- PASS: TestNoKubernetes/serial/Start (19.50s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.16s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-188000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-188000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (155.9816ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.16s)
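
The exit status 1 is the assertion succeeding: minikube ssh propagates the remote command's failure, and the stderr line exposes the underlying code 3, which is what systemctl is-active returns for an inactive unit (--quiet suppresses the printed state). A sketch of the same check without --quiet, so the state is visible:

	# prints "inactive" and exits 3 while kubelet is stopped; "active" / 0 otherwise
	out/minikube-darwin-amd64 ssh -p NoKubernetes-188000 "sudo systemctl is-active kubelet"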

TestNoKubernetes/serial/ProfileList (0.5s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.50s)

TestNoKubernetes/serial/Stop (2.37s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-188000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-188000: (2.373422647s)
--- PASS: TestNoKubernetes/serial/Stop (2.37s)

TestNoKubernetes/serial/StartNoArgs (19.16s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-188000 --driver=hyperkit 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-188000 --driver=hyperkit : (19.16106731s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (19.16s)
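
StartNoArgs restarts the profile with only --driver=hyperkit; the earlier --no-kubernetes choice persists in the profile's saved configuration, which is why VerifyK8sNotRunningSecond below still finds kubelet inactive. One way to inspect the stored settings (a sketch; config.json under the profile directory is where minikube keeps them):

	cat /Users/jenkins/minikube-integration/19461-1276/.minikube/profiles/NoKubernetes-188000/config.json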

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.13s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-188000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-188000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (128.035323ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.13s)

Test skip (18/215)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)
